HTTP_Poller: send output to Elasticsearch


#1

Hey everyone,

For test cases I am requesting some URLs and want to send the results to my Elasticsearch instance. This is my config file:

input {
  http_poller {
    urls => {
      "service1" => "http://192.168.99.100:9200/services/external/1"
      "service2" => "http://192.168.99.100:9200/services/external/2"
      "service3" => "http://192.168.99.100:9200/services/external/3"
      "service4" => "http://192.168.99.100:9200/services/external/4"
      "service5" => "http://192.168.99.100:9200/services/external/5"
    }
    automatic_retries => 0
    codec => "json"
    interval => 10
    request_timeout => 8
    metadata_target => "http_poller_metadata"
    tags => ["service_healthcheck"]
  }
}

filter {
  if [http_poller_metadata] {
    mutate {
      add_field => {
        "@name" => "%{http_poller_metadata[name]}"
        "@state" => "%{http_poller_metadata[state]}"
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost"]
  }
  stdout {
    codec => rubydebug
  }
}

And this is what the JSON behind the URL looks like:

{
  "_index": "services",
  "_type": "external",
  "_id": "1",
  "_version": 4,
  "found": true,
  "_source": {
    "doc": {
      "name": "Service-1",
      "state": "up"
    }
  }
}

I am getting results in the console, so something works.

Console:

{
                  "@state" => "%{http_poller_metadata[state]}",
                   "@name" => "service1",
                   "found" => true,
              "@timestamp" => 2017-09-15T11:11:06.682Z,
                  "_index" => "services",
                   "_type" => "external",
                "@version" => "1",
    "http_poller_metadata" => {
                 "request" => {
            "method" => "get",
               "url" => "http://192.168.99.100:9200/services/external/1"
        },
        "response_headers" => {
            "transfer-encoding" => "chunked",
                 "content-type" => "application/json; charset=UTF-8"
        },
                    "code" => 200,
        "response_message" => "OK",
           "times_retried" => 0,
         "runtime_seconds" => 0.005,
                    "name" => "service1",
                    "host" => "4bf19441e414"
    },
                 "_source" => {
        "doc" => {
             "name" => "Service-1",
            "state" => "up"
        }
    },
                     "_id" => "1",
                "_version" => 4,
                    "tags" => [
        [0] "service_healthcheck"
    ]
}

But the @name and @state fields are not filled with the correct values, and nothing is transferred into Elasticsearch (I see no changes in Kibana after starting).

Any suggestions?


#2

Hi Sharivari,

When you look at the console, you can see that both fields "name" and "state" are embedded in "doc" in the _source field. They are not stored in the "metadata_target" field, which contains information about the request/response, not the content of the response.

Could you try to access them in the filter like this:

add_field => {
  "@name" => "%{[doc][name]}"
  "@state" => "%{[doc][state]}"
}

I don't think it is necessary to check whether the "http_poller_metadata" field exists; it may be more useful to check whether [http_poller_metadata][code] equals 200.
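For example, a sketch of that check combined with the add_field above (untested, assuming the decoded response fields sit at the top level of the event):

filter {
  if [http_poller_metadata][code] == 200 {
    mutate {
      add_field => {
        "@name" => "%{[doc][name]}"
        "@state" => "%{[doc][state]}"
      }
    }
  }
}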

Regards
Romain


#3

Thanks a lot Rom1! I had to change it to

add_field => {
  "@name" => "%{[_source][doc][name]}"
  "@state" => "%{[_source][doc][state]}"
}

But the problem is that there is still nothing sent to Elasticsearch. I tried changing the elasticsearch output to this:

elasticsearch {
    hosts => "localhost"
    index => "services-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "*****"
}

Now it at least creates the index, but there are no fields in it.
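One thing worth checking in the Logstash log: with a response like the one above, the event carries top-level fields beginning with an underscore (_index, _type, _id, _version, _source), and Elasticsearch may reject such documents because these names clash with its reserved metadata fields. Just a guess, but a filter sketch that renames them out of the way (target names are made up here) would look like:

filter {
  mutate {
    rename => {
      "_index"   => "src_index"
      "_type"    => "src_type"
      "_id"      => "src_id"
      "_version" => "src_version"
      "_source"  => "src_source"
    }
  }
}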


(VIJAYA K PEDDAREDDY) #4

Hi Sharivari,

I'm in exactly the same boat as you. I can see the index getting created only if there are errors.

Here's my logstash config file:

input {
  http_poller {
    urls => {
      "service1" => "https://dc2-svzsys01.xxxx.com:11101/IBMJMXConnectorREST"
      "service2" => "https://dc2-svzsys01:11101/IBMJMXConnectorREST/mbeans"
      "service3" => "https://dc2-svzsys01.xxxxx.com:11101/IBMJMXConnectorREST/mbeanCount"
    }
    truststore => "/opt/data/conf/downloaded_truststore.jks"
    truststore_password => "******"
    codec => "json"
    request_timeout => 8
    schedule => { "every" => "3s" }
    metadata_target => "http_poller_metadata"
    tags => ["jmx_endpoints"]
  }
}

filter {
  json {
    source => "message"
  }

  if [http_request_failure] or [http_poller_metadata][code] != 200 {
    mutate {
      add_tag => "bad_request"
    }
  }

  if [http_poller_metadata][code] == 200 {
    mutate {
      add_tag => "good_request"
    }
  }
}

output {
  elasticsearch {
    hosts => ["10.19.28.21:9200"]
    index => "g2-jmx-metrics-%{+YYYY.MM.dd}"
  }

  file {
    path => "/opt/logs/logstash-5.1.2/g2-jmx-metrics-%{+YYYY.MM.dd}.txt"
  }

  stdout {
    codec => rubydebug
  }
}

I get logs in Kibana if I shut down that specific server:

http_poller_metadata.request.url    https://dc2-svzsys01.xxx.com:11101/IBMJMXConnectorREST
message                             Error 404: java.io.FileNotFoundException: SRVE0190E: File not found: /IBMJMXConnectorREST
tags                                _jsonparsefailure, jmx_endpoints, bad_request

The file output plugin creates files at the given location only if there's an error. Otherwise nothing.
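Since the filter already tags every event as good_request or bad_request, one way to narrow this down might be to route the outputs on those tags, so you can see whether any successful responses reach Elasticsearch at all. A sketch (untested, reusing the hosts, index, and path from your config):

output {
  if "good_request" in [tags] {
    elasticsearch {
      hosts => ["10.19.28.21:9200"]
      index => "g2-jmx-metrics-%{+YYYY.MM.dd}"
    }
  } else {
    file {
      path => "/opt/logs/logstash-5.1.2/g2-jmx-metrics-%{+YYYY.MM.dd}.txt"
    }
  }
}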

Please let me know if you found any solution to this issue.

Regards
VJ.


#5

At least I'm not the only one. But unfortunately I still couldn't figure out why it doesn't work. Do any of the pros here have an idea how to solve this?


(VIJAYA K PEDDAREDDY) #6

Hi Sharivari,

Glad you replied.

Can anybody here help us resolve this issue, please?

Regards
Vj.


(VIJAYA K PEDDAREDDY) #7

Hi @Sharivari, any luck solving this issue?


(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.