Integrating Opsview with ELK Log Monitoring

Hello all!
This is a brief blog post to explain how I quickly integrated my existing Opsview server with my existing ELK deployment.

ELK Monitoring in Opsview

I wanted a way to see, within Opsview, that a host has failed or is having problems and go "Hmm, let's have a look at the logs to see what's happening" without:
A) Having to SSH to the box and start tailing

B) Having to fire up ELK and start filtering
Let's take a step back for a moment: what is ELK? ELK stands for Elasticsearch / Logstash / Kibana. Essentially, Logstash collects and processes the log data, Elasticsearch stores and indexes it, and Kibana is the graphical front end through which users can create their own filters, graphs and so on.
Moving on to the integration. My current setup is as follows:

•    Opsview is monitoring all of my servers, network devices, virtual machines (KVM) and so forth – for items such as load average, memory usage, LVM capacity, temperatures, processes/services running, response times and so on

•    ELK is collecting logs from all of the aforementioned devices

What I wanted to be able to do off the bat with ELK was use URL-based syntax to filter the ELK view. However, this isn't immediately possible out of the box, it appears, so you will have to make slight modifications to the .json file (default.json, or whatever your ELK view is saved as). Open up your JSON view (i.e. /var/www/kibana3/app/dashboards/default.json) and edit the top part to look similar to the following:

{
  "title": "ELK: Opsview host filtered",
  "services": {
    "query": {
      "list": {
        "0": {
          "query": "{{ARGS.query || '*'}}",
          "alias": "",
          "color": "#7EB26D",
          "id": 0,
          "pin": false,
          "type": "lucene",
          "enable": true
        }
      },
      "ids": [
        0
      ]
    },
    "filter": {
      "list": {
        "0": {
          "type": "time",
          "field": "@timestamp",
          "from": "now-{{ARGS.from || '24h'}}",
          "to": "now",
          "mandate": "must",
          "active": true,
          "alias": "",
          "id": 0
        }
      },

Essentially, what we are doing is allowing URL-based querying by passing the '?query=MYFILTER' parameter straight through to the query box, which wasn't available by default. This allows us to open http://my-elk-server/index.html#/dashboard/file/default.json?query=opsview and ELK will open with a filter of 'opsview' applied. Neat, huh!?
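To make the fallback behaviour concrete, here is a small Python sketch (not part of Kibana itself, purely illustrative) of how the "{{ARGS.query || '*'}}" template resolves: it reads the 'query' parameter from the part of the URL after the '#', and falls back to '*' when no parameter is given.

```python
from urllib.parse import parse_qs, urlparse

def resolve_query(url, default="*"):
    """Mimic Kibana 3's {{ARGS.query || '*'}} template: return the
    'query' URL argument if present, otherwise fall back to '*'."""
    parsed = urlparse(url)
    # Kibana 3 puts dashboard args after the '#' fragment,
    # e.g. #/dashboard/file/default.json?query=opsview
    frag = parsed.fragment
    qs = frag.split("?", 1)[1] if "?" in frag else parsed.query
    params = parse_qs(qs)
    return params.get("query", [default])[0]

print(resolve_query("http://my-elk-server/index.html#/dashboard/file/default.json?query=opsview"))  # opsview
print(resolve_query("http://my-elk-server/index.html#/dashboard/file/default.json"))                # *
```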

So, now that that is working – test it out as above and you should get something similar to the following:

URL: http://192.168.0.38/index.html#/dashboard/file/default.json?query=host:1...

Screen:

ELK and Opsview

If not, then the filtering we created and edited above isn't working – double-check your edits to the .json file. If it is working, proceed to the next section!
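If you'd rather check the file itself than eyeball the browser, here is a small sanity-check sketch (my own helper, not an ELK tool). It confirms the dashboard file is still valid JSON after your edit and that the ARGS placeholder made it into the query definition; the path shown is the example one from earlier, so adjust it to your install.

```python
import json

def check_dashboard(path):
    """Verify the edited dashboard is still valid JSON and that the
    templated ARGS placeholder is present in the query definition."""
    with open(path) as f:
        dash = json.load(f)  # raises ValueError if the edit broke the JSON
    query = dash["services"]["query"]["list"]["0"]["query"]
    return "{{ARGS.query" in query

# Example path from this setup; adjust to your install:
# print(check_dashboard("/var/www/kibana3/app/dashboards/default.json"))
```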

Setting up in Opsview

In Opsview we are going to use the built-in 'Management URLs' functionality (docs here), which allows users to create a host template, i.e. 'My Linux Template', and give it a management URL of 'ssh://$HOSTADDRESS$:22', for example. This allows the user to dive straight into an SSH shell on that box from within the Opsview UI whenever that template is applied to a host. Cool, huh?

You can use this for anything – wikis, Confluence, service desks, you name it. For example, create a 'Wiki' host template with a management URL of 'http://wiki.internal.com/?query=$HOSTADDRESS$' – when this is applied to a series of hosts, you will be able to load the wiki and search it for the name of the server you are looking at, all from one menu option.

For our purposes, we are going to create an 'ELK' host template which we will apply to all of the hosts whose logs we're collecting with ELK.

Step 1: Create the host template

Fairly simple, go to ‘Settings > Host Templates > Add new’ and populate it with a name and description as below:

Set up Host template for Opsview and ELK

Step 2: Create the management URL
After clicking ‘Submit changes’ you will now be able to click on the previously-greyed-out ‘Management URLs’ button. In here we will need to create our ELK link, as below:

Create management URL

For reference, the syntax is ‘http://elk-log-server/index.html#/dashboard/file/default.json?query=$HOSTADDRESS$’. The important part here is $HOSTADDRESS$ – this variable or macro will be substituted out for the address of the host, i.e. if this template is applied to ‘exchange-server-1.microsoft.com’, when the management URL is clicked on that host the full URL will be http://elk-log-server/index.html#/dashboard/file/default.json?query=exch....
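The macro expansion above can be sketched in a few lines of Python – this is purely illustrative of the substitution behaviour, not Opsview's actual implementation:

```python
def expand_management_url(template, host_address):
    """Illustrative sketch (not Opsview's code) of how the $HOSTADDRESS$
    macro in a management URL is replaced with the host's address."""
    return template.replace("$HOSTADDRESS$", host_address)

template = "http://elk-log-server/index.html#/dashboard/file/default.json?query=$HOSTADDRESS$"
print(expand_management_url(template, "exchange-server-1.microsoft.com"))
```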

Step 3: Apply the template to the hosts

Next, we will need to apply the template to the hosts whose logs we want to monitor. You can do this via the hosts section of the 'Host Templates' tab, but because I'm lazy I did it via the host itself (Settings > Hosts, then clicked on the host in question), as below:

Apply template to host

And that’s it, the template is applied.

Step 4: View the logs

After a quick reload, go to your Opsview monitoring screens and click on the host's contextual menu – you will now see an extra option there, 'ELK':

View Host Logs in Opsview Monitor

That’s pretty much it! 

Now, all you need to do is apply the ‘ELK’ host template to the hosts whose logs you are monitoring and this option will appear ^^. That way in the future, you can see ‘oh, we have a host failure’ and dive straight into the logs at the click of a button, as below:

ELK Logstash


by Opsview Team,
Administrator
Opsview is passionately focused on monitoring that enables DevOps teams to deliver smarter business services, faster.
