Results Exporter lets Opsview 6.1 users integrate with Splunk and other analytics platforms

Using Opsview Results Exporter - Ready, Set, Splunk!

Added in Opsview Monitor 6.1, the Results Exporter component makes it easy to export your results out of Opsview Monitor for ingestion into other applications. In this blog post, I'll be exploring how you can quickly and painlessly configure your new component and begin to export to Splunk Enterprise.

To begin with, you'll need to install the new component (skip this step if you've already got your Results Exporter up and running). Installation is easy - see Opsview Knowledge Center - Results Exporter Component.

Once the component is installed, you'll need to configure it by creating and editing the file /opt/opsview/resultsexporter/etc/resultsexporter.yaml. (We provide an example file called resultsexporter.yaml.example.) To begin, as the docs detail, copy the relevant parts of the results_queue and registry stanzas into your new resultsexporter.yaml, replacing the dummy values for the message queue encoder key, message queue password, and Opsview registry password (not the registry root password!) with the actual values for your system. These values are autogenerated at deploy time and can be found in /opt/opsview/deploy/etc/user_secrets.yml. The resultsexporter.yaml skeleton you create will look like this:

resultsexporter:
  results_queue:
    messagequeue:
      encoder_key: dummy ## replace with value from /opt/opsview/deploy/etc/user_secrets.yml
      password: dummy    ## replace with value from /opt/opsview/deploy/etc/user_secrets.yml
  registry:
    password: dummy      ## replace with value from /opt/opsview/deploy/etc/user_secrets.yml
  outputs:
    syslog:
      ## all of our syslog outputs (if any) will go in here
    file:
      ## all of our file outputs (if any) will go in here
    http:
      ## all of our http outputs (if any) will go in here

Sample skeleton resultsexporter.yaml file with authentication overrides and empty output blocks. (/opt/opsview/resultsexporter/etc/resultsexporter.yaml)

Next, you'll add stanzas to resultsexporter.yaml defining the outputs used to export results. For the purposes of this article, these will be fairly simple. Be aware, though, that Results Exporter lets each individual output have its own custom filter and field mapping (controlling which results are selected for that output, and which fields they include), among other options - all fully documented.

Also remember that any time you change a Results Exporter configuration file, you need to restart the component for your changes to take effect. This is simple - just run (as root):

/opt/opsview/watchdog/bin/opsview-monit restart opsview-resultsexporter

 

Adding an Output

This component uses the concept of an 'output' to refer to an individual path data can take out of your Opsview Monitor instance. Outputs are divided into categories (syslog, file, or http) based on the export method. For example, if you write your results to two different files and also export to a syslog server, you have three outputs: two file outputs and one syslog output. New outputs are added to the relevant section of the config file, as you'll see when fleshing out resultsexporter.yaml later.

For exporting into Splunk Enterprise, you can use either syslog or http.

Exporting to Splunk via HTTP

One of the simplest ways to export is via the Splunk HTTP Event Collector (HEC), for which Opsview provides out-of-the-box support with minimal configuration. Once you've created an event collector to receive your Opsview Monitor results, all you need is the token linked to that collector.

Add your output under the http section and give it a name. HTTP outputs have types - pre-built configurations supplied by Opsview. Since this output will use the built-in splunk type, add that under the name of the output, as in the sample below.

Now, to complete the output, provide all the parameters required by the type you've chosen. For an HTTP output of type splunk, that means adding the host (the IP address/hostname) and port (8088 by default - this can be changed in your Splunk event collector's Global Settings) to identify your Splunk server, plus the access token you created earlier. Note that in the sample file below, the messagequeue/registry overrides (see the skeleton above) have been removed to save space.

resultsexporter:
  ## messagequeue/registry stuff goes here
  outputs:
    http:
      my_splunk_event_collector:
        type: splunk
        parameters:
          host: 123.123.123.123
          port: 8088
          token: 153d2c2d-0234-0dcf-f9a7-3413d96b4b2b

Sample resultsexporter.yaml, showing a complete configuration for a basic http output to Splunk via Splunk's HTTP Event Collector (HEC).

This is all you need to start sending the results into the event collector - just restart the component and watch the data flow in!
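If you'd like to sanity-check your event collector independently of Opsview, it helps to see the shape of the request involved. The sketch below builds a request in Splunk's documented HEC wire format (POST to /services/collector/event with an `Authorization: Splunk <token>` header); `build_hec_request` is a hypothetical helper for illustration, not part of Opsview or Splunk.

```python
import json

def build_hec_request(host, port, token, event, use_ssl=False):
    """Build the URL, headers and JSON body for a Splunk HEC event POST.

    Hypothetical helper for illustration -- the wire format mirrors
    Splunk's documented HEC interface, not Opsview internals.
    """
    scheme = "https" if use_ssl else "http"
    url = "%s://%s:%d/services/collector/event" % (scheme, host, port)
    headers = {"Authorization": "Splunk %s" % token,
               "Content-Type": "application/json"}
    body = json.dumps({"event": event})
    return url, headers, body

url, headers, body = build_hec_request(
    "123.123.123.123", 8088,
    "153d2c2d-0234-0dcf-f9a7-3413d96b4b2b",
    {"hostname": "webserver01", "servicecheck": "HTTP", "state": "OK"},
)
# Send with urllib.request, or equivalently from the shell:
#   curl -H "Authorization: Splunk <token>" \
#        http://<host>:8088/services/collector/event -d '{"event": "test"}'
```

A quick curl like the one in the comment is a handy way to confirm the collector and token work before pointing Results Exporter at them.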

Securing your HTTP output (optional)

We've also included a second built-in type called splunk-cert, which builds on the splunk type by letting you secure the connection with client/server certificates.

Certificates ensure your data cannot be read if intercepted in transit, giving you more confidence when sending sensitive information from Opsview to your Splunk instance. Splunk recommends the use of certificates when forwarding information into Splunk Web.

If you've installed Opsview, you already have access to the Opsview PKI (Public Key Infrastructure) - meaning you can create signed keys using Opsview as a CA (Certificate Authority) and then use these to secure your communication to Splunk. Alternatively, you can use your own certificates as supported by Splunk.

Note: If using SSL certificates, please ensure that the Global Settings of your Splunk HTTP Event Collectors have SSL enabled.

Once you have the required certificates (CA certificate, server certificate and client certificate - called ca.crt, server.pem and client.pem respectively in this example), you can secure your output. The server and client .pem files each contain both a certificate and a key, while the CA file contains only the certificate string.

You'll need to add the CA and server certificates to your Splunk server (they've been placed in $SPLUNK_HOME/etc/auth/mycerts/ for this example), then add the following lines to the Splunk server's $SPLUNK_HOME/etc/system/local/server.conf file:

[sslConfig]
...
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.crt    # path to ca cert

And in $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf:

[http]
...
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem   # server cert
sslPassword = <password>                                # if file has password set
requireClientCert = true                                # if true, only allow verified clients

Finally, restart your Splunk service.

Next, you need to add the CA and client certificates to your Opsview server. For this example, they've been put in /opt/opsview/mycerts. Split the client certificate .pem file into its two respective parts, called client.key and client.crt in the example below.
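As a sketch of what 'splitting' means here: a combined .pem file is just the key block and the certificate block concatenated, so you can separate them with plain text processing (or with stock `openssl` commands, noted below). The helper here is a minimal illustration, not an Opsview tool.

```python
import re

# A combined .pem is just '-----BEGIN X----- ... -----END X-----' blocks
# concatenated, so each block can be pulled out by its header type.
PEM_BLOCK = re.compile(
    r"-----BEGIN (?P<kind>[A-Z0-9 ]+)-----.*?-----END (?P=kind)-----\n?",
    re.DOTALL,
)

def split_pem(pem_text):
    """Return (certificate blocks, key blocks) found in a PEM string."""
    certs, keys = [], []
    for match in PEM_BLOCK.finditer(pem_text):
        if "KEY" in match.group("kind"):    # e.g. (RSA) PRIVATE KEY
            keys.append(match.group(0))
        else:                               # e.g. CERTIFICATE
            certs.append(match.group(0))
    return certs, keys

# Usage: write "".join(certs) to client.crt and "".join(keys) to
# client.key. Equivalently, from the shell:
#   openssl pkey -in client.pem -out client.key
#   openssl x509 -in client.pem -out client.crt
```

Either route produces the client.key and client.crt files referenced in the configuration below.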

Now you can finalise the certificate-secured http configuration - it follows the same pattern as the plain splunk type, but additionally exposes the ca_certs (path to your CA certificate), keyfile (key for the client certificate) and certfile (client certificate file) parameters.

resultsexporter:
  ## messagequeue/registry stuff goes here
  outputs:
    http:
      my_splunk_event_collector:
        type: splunk-cert
        parameters:
          host: 123.123.123.123
          port: 8088
          token: 153d2c2d-0234-0dcf-f9a7-3413d96b4b2b
          ca_certs: '/opt/opsview/mycerts/ca.crt'
          keyfile: '/opt/opsview/mycerts/client.key'
          certfile: '/opt/opsview/mycerts/client.crt'

Sample resultsexporter.yaml, showing configuration for a certificate-secured connection to a Splunk HTTP Event Collector.

Restart your Results Exporter component once again for the new output to take effect!

Exporting to Splunk via Syslog

As Splunk also supports data collection via TCP and UDP ports, you can use the Results Exporter to send syslog data over those ports as well.

Once you've opened your desired port(s) following the instructions in Monitor Splunk Network Ports, all you'll need is the details of your Splunk server.

Add your output under the syslog section and give it a name. Syslog outputs have a variety of configuration options, but as a minimum you need to specify the host and port so that the component can send the information to the right place. If you're using TCP, an additional protocol option is required, as the component defaults to UDP. Finally, you can set a custom log format and log date format to change how your syslog messages are constructed:

resultsexporter:
  ## messagequeue/registry stuff goes here
  outputs:
    syslog:
      my_splunk_syslog:
        type: splunk
        parameters:
          host: 123.123.123.123
          port: 11000
          protocol: tcp  ## only needed if using TCP
          log_facility: news
          log_level: notice
          log_date_format: '%Y-%m-%d %H:%M:%S'
          log_format: '[Opsview - %(asctime)s] %(message)s'

Sample Splunk syslog Results Exporter configuration with log level, facility, and other details set.
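To make the log_facility and log_level settings concrete: a standard syslog message starts with a priority (PRI) value computed as facility × 8 + severity (per RFC 3164/5424), so the news/notice pair above yields PRI 61. The sketch below builds such a message; it's illustrative only, and the exporter's exact wire framing may differ.

```python
from datetime import datetime

# Standard syslog facility and severity codes (RFC 3164); subset only.
FACILITIES = {"kern": 0, "user": 1, "mail": 2, "daemon": 3, "news": 7}
SEVERITIES = {"emerg": 0, "alert": 1, "crit": 2, "err": 3,
              "warning": 4, "notice": 5, "info": 6, "debug": 7}

def syslog_message(log_facility, log_level, message,
                   log_date_format="%Y-%m-%d %H:%M:%S",
                   log_format="[Opsview - %(asctime)s] %(message)s"):
    """Build a syslog-style datagram: <PRI> then the formatted message.

    Illustrative sketch only -- shows how facility and severity combine
    into the PRI value; the exporter's exact framing may differ.
    """
    pri = FACILITIES[log_facility] * 8 + SEVERITIES[log_level]
    fields = {
        "asctime": datetime.now().strftime(log_date_format),
        "message": message,
    }
    return "<%d>%s" % (pri, log_format % fields)

# news (7) * 8 + notice (5) = PRI 61, so the sample config tags every
# exported result with <61>. To send over UDP (the default protocol) to
# a hypothetical Splunk network input:
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(syslog_message("news", "notice", "HTTP OK").encode(),
#               ("123.123.123.123", 11000))
```

Splunk's network input simply receives whatever arrives on the port, so getting the facility/severity pair right mainly matters for how downstream searches and parsers classify the events.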

Once again, just restart the component and your data will start being exported to Splunk!

Full documentation for using this component can be found at Opsview Knowledge Center - Exporting Results. We've also recently posted a video about the Results Exporter and Splunk integration. Look out for more blogs, coming soon, covering Results Exporter use cases in depth!

by Owen Jenkins,
Technical Intern
University of Birmingham student studying Computer Science MSci, currently undertaking an internship here at Opsview. In my free time I love baking and doing pole fitness.
