Logstash.pipeline error: A plugin had an unrecoverable error

I am setting up a demo for our CEO to review - a Kibana dashboard. It is installed on a local server, and Filebeat runs on that same ELK server to generate logs for the Kibana demo. It was working yesterday and I have no idea how I broke it. Below is an excerpt from my logstash-plain.log file:

[2017-03-02T16:08:08,201][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Beats port=>5044, id=>"ad95fb8dc209078ce919ded5711317eda5c48b1e-1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_1c3bc15e-a81a-4dd1-8d23-b489985d00ab", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, congestion_threshold=>5, target_field_for_codec=>"message", tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60>
Error: event executor terminated

Below is my filebeat.yml - set up to run on the same box as the ELK server to generate demo logs:

filebeat:
  prospectors:
    -
      paths:
        #- /var/log/*.log
        - /var/log/syslog
        - /var/log/apache2/*.log # NEW!
      input_type: log
  registry_file: /var/lib/filebeat/registry

output:
  logstash: # not "elasticsearch"!
    hosts: ["localhost:5044"] # change to your hostname, note change of port

shipper:

logging:
  files:

This shows the listener is running on 5044:

netstat -l | grep 5044
tcp6 0 0 [::]:5044 [::]:* LISTEN
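Beyond confirming that the socket is listening, it can help to confirm the port actually accepts TCP connections from the same host Filebeat uses. This is a diagnostic fragment to run against a live server, assuming nc (netcat) is installed:

```
# Quick TCP connect test against the Beats port; exits 0 if the connection succeeds
nc -vz localhost 5044
```

If the listener is bound only to a specific interface, connecting via a different hostname than the one Logstash is bound to will fail here even though netstat shows a listener.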

---------- excerpt from the filebeat log

2017-03-02T16:17:59-06:00 ERR Failed to publish events caused by: read tcp 127.0.0.1:33882->127.0.0.1:5044: read: connection reset by peer
2017-03-02T16:17:59-06:00 INFO Error publishing events (retrying): read tcp 127.0.0.1:33882->127.0.0.1:5044: read: connection reset by peer
2017-03-02T16:18:22-06:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.publish.write_bytes=235 libbeat.logstash.publish.read_errors=1 libbeat.logstash.published_but_not_acked_events=2038
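When Filebeat reports connection resets like this, it is also worth validating the config file before digging further. On the Filebeat 5.x series (an assumption about the installed version; newer releases use `filebeat test config` instead), the flag below checks the YAML without shipping any events:

```
# Validate filebeat.yml syntax and log to stderr (Filebeat 5.x)
filebeat -configtest -e -c /etc/filebeat/filebeat.yml
```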

Here is the output from Elasticsearch for the Filebeat indices:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

-- it only shows data from yesterday, not today

{
  "took" : 88,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 37867,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "filebeat-2017.03.01",

I've never seen this error. Moving this to the Logstash forum - maybe someone more experienced with Logstash has an idea.

I figured out the issue. The host in the Logstash config was set to my domain name,
while the host in filebeat.yml was localhost.
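For anyone hitting the same thing: the fix amounts to making the beats input in Logstash and the logstash output in Filebeat agree on the same host. A hedged sketch of what the matching pieces might look like (the file path and hostnames are illustrative, not my actual config):

```
# /etc/logstash/conf.d/02-beats-input.conf (illustrative path)
input {
  beats {
    host => "0.0.0.0"   # listen on all interfaces, or match the host Filebeat connects to
    port => 5044
  }
}
```

```
# filebeat.yml - the output section must resolve to where Logstash is actually listening
output:
  logstash:
    hosts: ["localhost:5044"]
```

Binding the input to 0.0.0.0 sidesteps the mismatch entirely, since it accepts connections on every interface, including localhost.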

Thanks

