Logstash not outputting Heartbeat input


#1

Logstash 5.5.1
Heartbeat 5.5.2 & tried 6.0.0 beta2
logstash-plugin update logstash-input-beats

Cannot get Logstash to forward Heartbeat data to Elasticsearch. Debug mode shows it receives the data but then does nothing with it. The same Logstash instance happily forwards application log data, and I had no issue integrating Metricbeat on another server using a similar config.

Initially I thought this was because there was no JSON template on the Elasticsearch cluster (Heartbeat via Logstash doesn't send an initial template), so I set up Heartbeat to send directly to the Elasticsearch index. The logs showed it sent the JSON template, and Heartbeat data soon appeared successfully in the Kibana view.

However, when pointing back at Logstash, again nothing is sent to Elasticsearch. Logstash debug logging shows it receives the Heartbeat data but then does nothing with it.

Heartbeat YML:

 # Configure monitors
 heartbeat.monitors:
 - type: http
 
   # Monitor name used for job name and document type
   name: "heartbeat http"
   
   # Enable/Disable monitor
   enabled: true
   
   # List of URLs to query
   urls: ["https://www.xxx.com/Pages/default.aspx"]
 
   # Configure task schedule
   schedule: '@every 10s'
 
 output.logstash:
   enabled: true
   # The Logstash hosts
   hosts: ["127.0.0.1:5044"]

============== Logstash Shipper:

input {
    # Input from Heartbeat
    beats {
        port => 5044
    }
}

filter {
    # Add fields to all logs
    mutate {
        add_field => {
            "BU" => "pgo"
            "env" => "qa"
            "region" => "emea"
        }
    }
}

output {
    elasticsearch {
        hosts => "logstash-xxx.com"
        index => "logstash-xxx-%{+YYYY.MM}"
    }
}

Logstash Debug:

[2017-09-08T17:05:08,548][DEBUG][logstash.pipeline        ] filter received {"event"=>{"tcp"=>{"rtt"=>{"connect"=>{"us"=>146475}}, "port"=>443}, "@timestamp"=>2017-09-08T17:05:05.517Z, "resolve"=>{"rtt"=>{"us"=>508774}, "ip"=>"xx.xx.xx.xx", "host"=>"www.xxx.com"}, "beat"=>{"hostname"=>"xxxxxxxx", "name"=>"xxxxxxxx", "version"=>"6.0.0-beta2"}, "@version"=>"1", "host"=>"xxxxxxxx", "http"=>{"rtt"=>{"response_header"=>{"us"=>2063446}, "total"=>{"us"=>2508730}, "write_request"=>{"us"=>0}, "content"=>{"us"=>0}, "validate"=>{"us"=>2063446}}, "response"=>{"status"=>200}, "url"=>"https://www.xxx.com/Pages/default.aspx"}, "tls"=>{"rtt"=>{"handshake"=>{"us"=>298809}}}, "monitor"=>{"duration"=>{"us"=>3018482}, "scheme"=>"https", "ip"=>"xx.xx.xx.xx", "host"=>"www.xxx.com", "name"=>"heartbeat http", "id"=>"heartbeat http@https://www.xxx.com/Pages/default.aspx", "type"=>"http", "status"=>"up"}, "type"=>"monitor", "tags"=>["beats_input_raw_event"]}}
[2017-09-08T17:05:08,549][DEBUG][logstash.util.decorators ] filters/LogStash::Filters::Mutate: adding value to field {"field"=>"BU", "value"=>["pgo"]}
[2017-09-08T17:05:08,550][DEBUG][logstash.util.decorators ] filters/LogStash::Filters::Mutate: adding value to field {"field"=>"env", "value"=>["qa"]}
[2017-09-08T17:05:08,550][DEBUG][logstash.util.decorators ] filters/LogStash::Filters::Mutate: adding value to field {"field"=>"region", "value"=>["emea"]}
[2017-09-08T17:05:08,552][DEBUG][logstash.pipeline        ] output received {"event"=>{"tcp"=>{"rtt"=>{"connect"=>{"us"=>146475}}, "port"=>443}, "resolve"=>{"rtt"=>{"us"=>508774}, "ip"=>"xx.xx.xx.xx", "host"=>"www.xxx.com"}, "monitor"=>{"duration"=>{"us"=>3018482}, "scheme"=>"https", "ip"=>"xx.xx.xx.xx", "host"=>"www.xxx.com", "name"=>"heartbeat http", "id"=>"heartbeat http@https://www.xxx.com/Pages/default.aspx", "type"=>"http", "status"=>"up"}, "type"=>"monitor", "env"=>"qa", "tags"=>["beats_input_raw_event"], "@timestamp"=>2017-09-08T17:05:05.517Z, "BU"=>"pgo", "beat"=>{"hostname"=>"xxxxxxxx", "name"=>"xxxxxxxx", "version"=>"6.0.0-beta2"}, "@version"=>"1", "host"=>"xxxxxxxx", "http"=>{"rtt"=>{"response_header"=>{"us"=>2063446}, "total"=>{"us"=>2508730}, "write_request"=>{"us"=>0}, "content"=>{"us"=>0}, "validate"=>{"us"=>2063446}}, "response"=>{"status"=>200}, "url"=>"https://www.xxx.com/Pages/default.aspx"}, "tls"=>{"rtt"=>{"handshake"=>{"us"=>298809}}}, "region"=>"emea"}}
[2017-09-08T17:05:11,397][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline
[2017-09-08T17:05:16,398][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline
[2017-09-08T17:05:17,618][DEBUG][logstash.pipeline        ] filter received {"event"=>{"tcp"=>{"rtt"=>{"connect"=>{"us"=>176764}}, "port"=>443}, "@timestamp"=>2017-09-08T17:05:15.517Z, "resolve"=>{"rtt"=>{"us"=>976}, "ip"=>"xx.xx.xx.xx", "host"=>"www.xxx.com"}, "beat"=>{"hostname"=>"xxxxxxxx", "name"=>"xxxxxxxx", "version"=>"6.0.0-beta2"}, "@version"=>"1", "host"=>"xxxxxxxx", "http"=>{"rtt"=>{"response_header"=>{"us"=>1542972}, "total"=>{"us"=>2083971}, "write_request"=>{"us"=>0}, "content"=>{"us"=>0}, "validate"=>{"us"=>1542972}}, "response"=>{"status"=>200}, "url"=>"https://www.xxx.com/Pages/default.aspx"}, "tls"=>{"rtt"=>{"handshake"=>{"us"=>364234}}}, "monitor"=>{"duration"=>{"us"=>2084947}, "scheme"=>"https", "ip"=>"xx.xx.xx.xx", "host"=>"www.xxx.com", "name"=>"heartbeat http", "id"=>"heartbeat http@https://www.xxx.com/Pages/default.aspx", "type"=>"http",

(Steffen Siering) #2

When you configured heartbeat to send to elasticsearch, which index did you use? I wonder if there is a mapping error (conflict on field name and type). Normally we ask users to use a different index per beat type via %{[@metadata][beat]}-%{+yyyy.MM.dd}.

No idea why there is no error in Logstash, but have you checked the Elasticsearch logs for mapping errors?
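The per-beat index suggestion above would look roughly like this in the Logstash output section (a sketch, reusing the hosts value from the config posted earlier; the `[@metadata][beat]` field is populated by the beats input plugin):

```
# Sketch: one index per beat type, so Heartbeat and application-log
# mappings never collide in the same index.
output {
  elasticsearch {
    hosts => "logstash-xxx.com"
    index => "%{[@metadata][beat]}-%{+yyyy.MM.dd}"
  }
}
```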


#3

Hi, both the Logstash and Elasticsearch settings in the Heartbeat output section point to the same already-existing index (which already held application log data collected by Logstash previously).

The problem seems to have been a combination of originally missing the add_field entries, which include the variable "BU" required for the index name, plus a subsequent switch to the Heartbeat 6 beta during the initial troubleshooting.

Heartbeat v5.5.2 initially output directly to Elasticsearch successfully, creating the JSON template. But when pointed at Logstash, Heartbeat data was no longer visible in Kibana (application logs continued to appear) because the "BU" variable was not being added to the Heartbeat input, so the Elasticsearch index name "logstash-xxxx-%{BU}-m-%{+YYYY.MM}" was not valid. I assume it just caused an error on Elasticsearch and the index was never created.

The Heartbeat v6 beta was used subsequently, and at that point I noticed and rectified the missing BU variable problem; however, with Heartbeat v6 there was still no Heartbeat output via Logstash. When pointing the Heartbeat v6 beta directly at Elasticsearch (not attempted before) I see an error:

2017-09-11T10:36:22Z INFO Setup Beat: heartbeat; Version: 6.0.0-beta2
2017-09-11T10:36:22Z CRIT Exiting: setup.template.name and setup.template.pattern have to be set if index name is modified.
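That CRIT message is Beats 6.x refusing to start because the index name was customised without matching template settings. A minimal heartbeat.yml sketch that satisfies the check (index name here is hypothetical, not from the thread):

```yaml
# Sketch only: in Beats 6.x, overriding output.elasticsearch.index
# requires setup.template.name and setup.template.pattern to be set
# so the loaded template still matches the custom index.
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "heartbeat-custom-%{+yyyy.MM}"

setup.template.name: "heartbeat-custom"
setup.template.pattern: "heartbeat-custom-*"
```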

In my view the index name was consistent throughout the testing of the different Heartbeat versions, apart from the missing-variable problem.

I switched back to Heartbeat v5.5.2 and I am getting Heartbeat data now visible in the same index as the application logs whether sending direct to Elastic or via Logstash.


#4

Looks like something introduced in Heartbeat v6 causes a mapping conflict.

When I point Heartbeat v6 at the same index that Logstash uses I see a mapping error, but Heartbeat v5.x works fine:

2017-10-05T12:21:29Z WARN Can not index event (status=400): {"type":"illegal_argument_exception","reason":"[monitor] is defined as an object in mapping [doc] but this name is already used for a field in other types"}

"beat": {
  "name": "heartbeat",
  "hostname": "xxxxxxxxx",
  "version": "6.0.0-rc1"
},
"monitor": {
  "host": "www.xxxxxxxx",
  "status": "up",
  "scheme": "https",
  "name": "heartbeat-http",
  "type": "http",
  "ip": "xxxxxxxx",
  "duration": {
    "us": 27347
  },
  "id": "heartbeat-http@https://www.xxxxxxxxxxxxx"
},
"type": "monitor",
"tcp": {
  "rtt": {
    "connect": {
      "us": 1953
    }
  },
  "port": 443
},
"tls": {
  "rtt": {
    "handshake": {
      "us": 11720
    }
  }
}
So I might have to create a separate index going forward, as recommended.
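Since the conflicting documents all carry `"type": "monitor"` (visible in the debug output above), one way to sketch the split in the existing Logstash output is a conditional, keeping the original index for everything else (index names other than the original are hypothetical):

```
# Sketch: send Heartbeat monitor events to their own index so the
# v6 "monitor" object mapping never clashes with the shared index.
output {
  if [type] == "monitor" {
    elasticsearch {
      hosts => "logstash-xxx.com"
      index => "heartbeat-%{+YYYY.MM}"
    }
  } else {
    elasticsearch {
      hosts => "logstash-xxx.com"
      index => "logstash-xxx-%{+YYYY.MM}"
    }
  }
}
```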


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.