Filebeat is not sending logs to Logstash: "A plugin had an unrecoverable error"

I am getting the error below in the Logstash logs. Could you please help me understand why I am getting this error?

[2017-05-29T12:45:12,250][ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: <LogStash::Inputs::Beats port=>5044, ssl=>true, ssl_certificate=>"/opt/bitnami/logstash/ssl/logstash-remote.crt1",
[2017-05-29T12:45:18,266][ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: <LogStash::Inputs::Beats port=>5044, ssl=>true, ssl_certificate=>"/opt/bitnami/logstash/ssl/logstash-remote.crt1", ssl_key=>"/opt/bitnami/logstash/ssl/logstash-remote.key", id=>"a7a87cc40298b03d988d0ddd91f714277a95bb19-6", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_fe9a89ad-fdec-4af4-b50e-42182159c696", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, congestion_threshold=>5, target_field_for_codec=>"message", tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60>
  Error: event executor terminated

filebeat.yml

filebeat:
    prospectors:
      -
        paths:
          - /opt/wildfly/standalone/log/*.log
          - /var/log/syslog
          #  - /var/log/*.log
        document_type: syslog
  output:
    logstash:
      hosts: ["xxxxxxxxxxxx:5044"]
      bulk_max_size: 1024
logging.level: warning
logging.to_files: true
logging.to_syslog: false
logging.files:
  path: /var/log/mybeat
  name: mybeat.log
  keepfiles: 7
      tls:
        certificate_authorities: ["/home/ec2-user/logstash-remote.crt1"]

access-log.conf

input {
  file {
    path => "/opt/bitnami/apache2/logs/access_log"
    start_position => "beginning"
  }
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/opt/bitnami/logstash/ssl/logstash-remote.crt1"
    ssl_key => "/opt/bitnami/logstash/ssl/logstash-remote.key"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
  }
}

Note that I edited your post to use three backticks (```) for code formatting.

I can't explain the error you get on the Logstash side, but the Filebeat config might have the output section indented too much; it should be top-level (no indentation). Also, the TLS configuration needs to be under logstash, not at the end of the file. Something like this:

filebeat:
  prospectors:
    -
      paths:
        - /opt/wildfly/standalone/log/*.log
        - /var/log/syslog
        #  - /var/log/*.log
      document_type: syslog
output:
  logstash:
    hosts: ["xxxxxxxxxxxx:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/home/ec2-user/logstash-remote.crt1"]

logging.level: warning
logging.to_files: true
logging.to_syslog: false
logging.files:
  path: /var/log/mybeat
  name: mybeat.log
  keepfiles: 7
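
Once edited, it is worth sanity-checking the Filebeat config before restarting. A minimal example, assuming a package install with the config at /etc/filebeat/filebeat.yml (the -configtest flag is the Filebeat 5.x spelling; newer versions use filebeat test config instead):

# Exits non-zero and prints the problem if the YAML is invalid
filebeat -configtest -c /etc/filebeat/filebeat.yml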

For the LS error, it might be worth asking in the Logstash forums.
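
One thing worth trying first on that side (a sketch, assuming a Bitnami-style install; adjust both paths to wherever Logstash and access-log.conf actually live) is validating the pipeline file syntax without starting the pipeline:

# Parse the config and exit (Logstash 5.x)
/opt/bitnami/logstash/bin/logstash --config.test_and_exit -f /opt/bitnami/logstash/conf/access-log.conf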

For me, logs are also not being generated in the location below:

logging.files:
  path: /var/log/mybeat
  name: mybeat.log

Is there any problem in the configuration?

That looks good at first sight, but you have the level set to warning, so maybe there are simply no messages at that level. Try setting it to debug for a short while to check.
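
For example, change this one line and restart Filebeat:

logging.level: debug   # temporarily; set back to warning once done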


That worked! 🙂

But now I am getting the error below in my log:

2017-05-31T09:11:40Z DBG Check file for harvesting: /home/ec2-user/access_log
2017-05-31T09:11:40Z DBG Update existing file for harvesting: /home/ec2-user/access_log, offset: 42734
2017-05-31T09:11:40Z DBG File didn't change: /home/ec2-user/access_log
2017-05-31T09:11:40Z DBG Prospector states cleaned up. Before: 1, After: 1
2017-05-31T09:11:40Z DBG Flushing spooler because of timeout. Events flushed: 0
2017-05-31T09:11:45Z DBG Flushing spooler because of timeout. Events flushed: 0
2017-05-31T09:11:49Z DBG connect
2017-05-31T09:11:49Z DBG Try to publish 377 events to logstash with window size 1
2017-05-31T09:11:49Z DBG handle error: read tcp 172.31.7.247:60988->52.60.189.106:5044: read: connection reset by peer
2017-05-31T09:11:49Z DBG 0 events out of 377 events sent to logstash. Continue sending
2017-05-31T09:11:49Z DBG close connection
2017-05-31T09:11:49Z DBG closing
2017-05-31T09:11:49Z ERR Failed to publish events caused by: read tcp 172.31.7.247:60988->52.60.189.106:5044: read: connection reset by peer
2017-05-31T09:11:49Z INFO Error publishing events (retrying): read tcp 172.31.7.247:60988->52.60.189.106:5044: read: connection reset by peer

That looks like an SSL error. Did you move the tls section under logstash? Try also to debug with openssl: openssl s_client -connect logstash:5044 -showcerts
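
For example (logstash-host is a placeholder; use your real Logstash host and the CA file from filebeat.yml):

# Should complete the TLS handshake and print the server certificate chain
openssl s_client -connect logstash-host:5044 -CAfile /home/ec2-user/logstash-remote.crt1 -showcerts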


Thank you! Changing tls to ssl resolved the issue. 🙂

Before:

tls:
  certificate_authorities: ["/home/ec2-user/logstash-remote.crt1"]

After:

ssl:
  certificate_authorities: ["/home/ec2-user/logstash-remote.crt1"]
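
For reference, the resulting output section (hosts and paths as in the posts above) would look like:

output:
  logstash:
    hosts: ["xxxxxxxxxxxx:5044"]
    bulk_max_size: 1024
    ssl:
      certificate_authorities: ["/home/ec2-user/logstash-remote.crt1"]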
