Using Opsview Results Exporter - Filtering Results
Added in Opsview Monitor 6.1, the Results Exporter component makes it easy to export your results out of Opsview Monitor for ingestion into other applications. In this blog post, I’ll be exploring how you can leverage our custom filters to fine-tune your export pathways and remove unwanted messages.
You can check out my previous blog post, Using Opsview Results Exporter - Ready, Set, Splunk!, for the basics of setting up the component. I recommend reading it first to get an understanding of the terminology I'll use below, particularly with regard to setting up the configuration file.
When developing the new Results Exporter Component, we understood it was crucial to build in a mechanism for filtering results from the outset. When exporting into logs or analytics systems, not only is it difficult to identify important trends or events when your results are polluted with unwanted pieces of data, but it can have a real cost to your business. For example, Splunk Enterprise is based on volume pricing - meaning that the more data you ingest, the more you’re paying. Due to this, it was natural to support filtering at the point of export, ensuring you have complete control over what data leaves your Opsview system.
We’ve tried to ensure this filtering system is easy to use and intuitive, while remaining powerful and comprehensive. Once again (if you’ve read my last blog), we’ll be making use of the configuration YAML file located at /opt/opsview/resultsexporter/etc/resultsexporter.yaml to control the behaviour of the component. Don’t forget you have to restart the component for changes in this file to take effect - just run:
/opt/opsview/watchdog/bin/opsview-monit restart opsview-resultsexporter
How does our filtering system work?
Each output can have a ‘filter’ applied to it - this is a string that describes which messages will be permitted through that output. Any message meeting the conditions set out in the string will be processed and exported.
To apply a filtering string to an output, you simply enter it under the filter key within that output (here using the example of the HTTP output created in the last blog):
[Image: Opsview's Results Exporter lets you define a filter string for any output. The screenshot shows where to insert such a filter string in an existing Splunk output, configured in the file /opt/opsview/resultsexporter/etc/resultsexporter.yaml]
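As a sketch of what that placement looks like (the output name and URL below are placeholders I've made up, not values from the original screenshot - use the settings from your own output):

```yaml
resultsexporter:
  outputs:
    http:
      my_splunk_output:                       # hypothetical output name
        url: https://splunk.example.com:8088  # placeholder endpoint
        filter: '(hostname == "host1")'       # only messages matching this filter are exported
```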
This ensures that even if you have a multitude of different outputs, you can be sure which messages are being permitted through each one, and modify them all separately.
What filters can I use?
To allow all messages through (this is the default), you can simply use an empty string (or omit the filter entirely), or use the filter string `*` to be more explicit in your intention. Alternatively, to prevent any messages from being allowed through, you can use a filter that no message can satisfy.
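For example, an explicit allow-all filter on a file output looks like this (the output name and path here are hypothetical):

```yaml
resultsexporter:
  outputs:
    file:
      everything:                            # hypothetical output name
        path: /var/log/opsview-results.log   # placeholder path
        filter: '*'                          # explicitly allow every message through
```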
Filter strings can also be built using comparisons - e.g. checking whether a particular part of the message being considered matches a given value or pattern. We support these comparisons:
| Operator | Meaning |
|----------|---------|
| `==` | is equal to |
| `!=` | is not equal to |
| `>=` | is greater than or equal to |
| `<=` | is less than or equal to |
| `!~` | does not contain |
| `<`  | is less than |
| `>`  | is greater than |
| `!@` | does not match (regex) |
As a simple example of a comparison, we can limit our output to only show results from the “Connectivity - LAN” Service Check:
[Image: a simple comparison filter example - only show results from the "Connectivity - LAN" Service Check.]
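A sketch of what such a filter might look like - note that the field name `servicecheckname` is my assumption here, so check the Knowledge Center documentation linked at the end of this post for the exact field names available:

```yaml
filter: '(servicecheckname == "Connectivity - LAN")'  # field name is an assumption
```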
Note that we recommend using single quotes to surround the filter, and double quotes for the strings used inside comparisons (as seen above). This way, you won't need to escape backslashes in your regular expressions.
These comparisons can then be mixed and combined through the use of the `&&` (logical and) and `||` (logical or) operators. So if we wanted to only export results when our Service Check began to report a non-OK result (for example, to log error events and attempt to identify the cause), we can simply add another condition:
[Image: a more complex filter, isolating messages whose current state is not 'OK'.]
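Under the same caveat about field names (`servicecheckname` and `current_state` are illustrative guesses, not confirmed names), a combined filter might look like:

```yaml
filter: '(servicecheckname == "Connectivity - LAN") && (current_state != "OK")'
```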
What if I have lots of outputs, and want to reuse filters?
Using a YAML file for configuration means we can leverage the power of YAML anchors to allow simple reuse of filters. If I have several outputs currently declared for my Results Exporter component as below:
```yaml
resultsexporter:
  outputs:
    file:
      my_log_file_1:
        path: /my/file/path1
      my_log_file_2:
        path: /my/file/path2
      my_log_file_3:
        path: /my/file/path3
```
I might want to apply the same filter to each. For example, the filter string `(hostname == "host1")` will limit results to those relating to the "host1" host. To apply this to each output, I can declare the filter once with the `&` (anchor) operator and a name, and then use that name to refer to it in my outputs with the `*` (anchor reference) operator. This means if I decide to change the filter in the future, I only have to edit it in one place.
```yaml
resultsexporter:
  outputs:
    filters: &only_host1 '(hostname == "host1")'
    file:
      my_log_file_1:
        filter: *only_host1
        path: /my/file/path1
      my_log_file_2:
        filter: *only_host1
        path: /my/file/path2
      my_log_file_3:
        filter: *only_host1
        path: /my/file/path3
```
Of course, filtering is only a part of the capability of the Results Exporter. We also allow you to map the fields within the result messages themselves using our mapping system, so look out for a breakdown of that in the next blog!
Full documentation for using this filtering system (and an exhaustive list of all the available fields) can be found at Opsview Knowledge Center - Exporting Results (Filtering).