Splunk API monitor - Logstash pipeline question?

Hi Folks,

I built a Logstash pipeline to monitor Splunk via the Splunk REST API by doing the following.

  1. Build a multi-file Logstash pipeline like so:

100-input-exec.conf

input {
  exec {
    id => "curl https://splunkmasternode:8089/services/cluster/master/info"
    command => "curl -X GET 'https://splunkmasternode:8089/services/cluster/master/info?output_mode=json' --insecure -u <insertausername>:<insertapasswordhere> > /tmp/logstash/searchheads/event_master; curl -X GET 'https://splunkmasternode:8089/services/cluster/master/searchheads?output_mode=json' --insecure -u <insertausername>:<insertapasswordhere> > /tmp/logstash/searchheads/event_searchhead; python /etc/logstash/conf.d/api-monitors/mergejson.py"
    tags => ["nosavelogs","splunk-searchhead-monitoring"]
    add_field => { "client-service" => "my-splunk" }
    interval => 60
    codec => json
  }
  exec {
    id => "curl https://splunkmasternode:8089/servicesNS/-/-/search/distributed/peers"
    command => "curl -X GET 'https://splunkmasternode:8089/services/cluster/master/info?output_mode=json' --insecure -u <insertausername>:<insertapasswordhere> > /tmp/logstash/peers/event_master; curl -X GET 'https://splunkmasternode:8089/servicesNS/-/-/search/distributed/peers?output_mode=json' --insecure -u <insertausername>:<insertapasswordhere> > /tmp/logstash/peers/event_peers; python /etc/logstash/conf.d/api-monitors/mergejson_peers.py"
    tags => ["nosavelogs","splunk-cluster-monitoring"]
    add_field => { "client-service" => "my-splunk" }
    interval => 60
    codec => json
  }
}
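
One caveat: the credentials sit in plain text in the exec command (which is why the filter below removes the command field from each event). Logstash also supports ${VAR} environment-variable substitution in config values, so a minimal sketch that keeps them out of the file, assuming SPLUNK_USER and SPLUNK_PASS are exported in the Logstash service environment, would look like:

input {
  exec {
    command => "curl -X GET 'https://splunkmasternode:8089/services/cluster/master/info?output_mode=json' --insecure -u ${SPLUNK_USER}:${SPLUNK_PASS} > /tmp/logstash/searchheads/event_master"
    interval => 60
    codec => json
  }
}

Note that the command field on the resulting event still contains the substituted values, so the remove_field in the filter is needed either way.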

400-filter-client-service-json.conf

filter {
  if [client-service] == "my-splunk" {
    urldecode {
      all_fields => true
    }
    # split into one record per "entry" element; Logstash doesn't deal well with arrays in JSON
    split {
      field => "entry"
    }
    # remove this field as it has a password in it
    mutate {
      remove_field => [ "command" ]
    }
  }
}
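
To illustrate what the split filter buys you (simplified, not real Splunk output): an event like

{ "client-service": "my-splunk", "entry": [ { "name": "sh01" }, { "name": "sh02" } ] }

becomes two separate events, one per search head, which index and alert cleanly:

{ "client-service": "my-splunk", "entry": { "name": "sh01" } }
{ "client-service": "my-splunk", "entry": { "name": "sh02" } }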

800-output-conditional.conf

output {
  elasticsearch {
    id => "api-monitors"
    hosts => ["myelastichost01:9200"]
    codec => "json_lines"
    index => "api-monitor-%{+YYYY.MM.dd}"
  }
}

You will need to pip install jsonmerge first.
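
For context, jsonmerge's merge(base, head) returns head with any keys it lacks filled in from base (head wins on conflicts), which is what lets the scripts below stamp the master node's info onto every search head/peer entry. A quick illustrative example with made-up fields:

from jsonmerge import merge

master = {"label": "master01", "site": "dc1"}
peer = {"label": "peer07", "status": "Up"}

print(merge(master, peer))  # peer's label wins, master's site is carried over:
# {'label': 'peer07', 'site': 'dc1', 'status': 'Up'}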

mergejson.py

#!/usr/bin/python

from jsonmerge import merge
import json
import os, errno

path = '/tmp/logstash/searchheads/'
my_objects = []

# make sure the working directory exists on first run
try:
    os.makedirs(path)
except OSError as e:
    if e.errno != errno.EEXIST:
        raise

# sort so event_master always loads before event_searchhead
for filename in sorted(os.listdir(path)):
    with open(os.path.join(path, filename)) as json_file:
        my_objects.append(json.load(json_file))

# merge the master node entry into every search head entry
for x, obj in enumerate(my_objects[1]["entry"]):
    my_objects[1]["entry"][x] = merge(my_objects[0]["entry"][0], obj)

print(json.dumps(my_objects))

and

mergejson_peers.py

#!/usr/bin/python

from jsonmerge import merge
import json
import os, errno

path = '/tmp/logstash/peers/'
my_objects = []

# make sure the working directory exists on first run
try:
    os.makedirs(path)
except OSError as e:
    if e.errno != errno.EEXIST:
        raise

# sort so event_master always loads before event_peers
for filename in sorted(os.listdir(path)):
    with open(os.path.join(path, filename)) as json_file:
        my_objects.append(json.load(json_file))

# merge the master node entry into every peer entry
for x, obj in enumerate(my_objects[1]["entry"]):
    my_objects[1]["entry"][x] = merge(my_objects[0]["entry"][0], obj)

print(json.dumps(my_objects))
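
The two scripts are identical apart from the path, so a single hypothetical variant that takes the directory as a command-line argument (e.g. python mergejson.py /tmp/logstash/peers/) would avoid maintaining two copies; a sketch:

#!/usr/bin/python

from jsonmerge import merge
import json
import os
import sys

# directory to scan is passed as the first argument
path = sys.argv[1]
my_objects = []

for filename in sorted(os.listdir(path)):
    with open(os.path.join(path, filename)) as json_file:
        my_objects.append(json.load(json_file))

for x, obj in enumerate(my_objects[1]["entry"]):
    my_objects[1]["entry"][x] = merge(my_objects[0]["entry"][0], obj)

print(json.dumps(my_objects))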

This gives me nice API data from the search heads and distributed peer nodes merged with the master node API data. You can then perform alerting on the data, etc.
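
To sanity-check that events are actually landing before wiring up alerting, a query like the following (assuming the Elasticsearch host from the output config above) should return merged entries:

curl 'http://myelastichost01:9200/api-monitor-*/_search?size=1&pretty'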

  2. I know I can run this on another Logstash host, but then I will effectively have double the data in the Elasticsearch index, giving me a kind of HA monitoring of the Splunk API. However, I don't want to do this if I can avoid it. I would like a way of running a "standby" type pipeline on another Logstash host (or hosts). What are my options?