Filebeat - Logstash connection reset by peer

Hi, I am facing a problem with Filebeat sending logs to Logstash. My architecture: Filebeat (versions 5.3.0 and 5.2.2, tried both) -> logstash-broker (versions 5.2.2 and 5.3.0) -> redis-cache -> logstash-indexer -> Elasticsearch.

The problem is between Filebeat and logstash-broker.
Filebeat configuration:

  • /etc/filebeat/filebeat.yml

    filebeat.config_dir: /etc/filebeat/conf.d
    output.logstash:
      hosts: ["logstash-broker.mydomain:5044"]

  • /etc/filebeat/conf.d/supervisor

    filebeat.prospectors:
      - input_type: log
        paths:
          - /var/log/supervisor/*-stdout.log
        fields_under_root: true
        fields:
          type: supervisor
          lsi_name: supervisor
          lsi_type: filebeat
          lsi_port: 5044
          lso_name: NULL
        multiline:
          pattern: '^\s'
          match: before

logstash-broker configuration:

    input {
        beats {
            port => 5044
        }
    }
    filter {
    }
    output {
        redis {
            host => "redis.mydomain"
            batch => true
            batch_events => 5000
            key => "supervisor"
            data_type => "list"
        }
    }
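
To confirm whether any events make it through the broker to Redis at all, the length of the list can be checked on the Redis side, for example (host and key taken from the config above):

    redis-cli -h redis.mydomain llen supervisor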

In the Logstash INFO logs there is nothing about this problem, and in DEBUG mode I can't tell because of the noise from the other TCP/UDP inputs - those are working fine.
In the Filebeat log there is:

    2017-04-10T15:49:50+02:00 ERR Failed to publish events caused by: read tcp filebeat_ip:56088->logstash_ip:5044: read: connection reset by peer
    2017-04-10T15:49:50+02:00 INFO Error publishing events (retrying): read tcp filebeat_ip:56088->logstash_ip:5044: read: connection reset by peer
    2017-04-10T15:49:54+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_errors=1 libbeat.logstash.publish.write_bytes=332 libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.published_but_not_acked_events=10

I am running the latest logstash-input-beats plugin (version 3.1.14).

I found some similar topics here, but none of the solutions worked (I tried playing with client_inactivity_timeout, pipeline workers, timeouts and a few more settings on both sides [Filebeat, Logstash]).
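
For reference, the kind of values I was playing with looked roughly like this (just examples, none of them fixed it):

    # logstash-broker, beats input (client_inactivity_timeout defaults to 60s)
    input {
        beats {
            port => 5044
            client_inactivity_timeout => 300
        }
    }

    # filebeat.yml, logstash output (timeout defaults to 30s)
    output.logstash:
      hosts: ["logstash-broker.mydomain:5044"]
      timeout: 90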

When I restart the Logstash service some logs are processed, but after a while I get that connection reset by peer again. The Logstash instance is reasonably sized: 4 CPUs, 8 GB RAM, 4 GB heap.

Any advice?
Thank you

Hi, I'm having similar problems but my Logstash configuration is different. When I use "input { tcp { port" my data goes through but I get LogStash::Json::ParserError, and when I use "input { beats { port" no data goes through and I get this "... write: connection reset by peer" error.

You can try adding either a tcp or a beats input to your config to see if you get any further.
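
Roughly like this (ports and the codec are just examples): Beats shippers speak the lumberjack protocol, so they need the beats input, while plain JSON over raw TCP needs the tcp input with a JSON codec.

    input {
        # for Filebeat and other Beats shippers (lumberjack protocol)
        beats {
            port => 5044
        }
        # for plain newline-delimited JSON sent over raw TCP
        tcp {
            port => 5045
            codec => json_lines
        }
    }

I suspect the ParserError I saw with the tcp input simply comes from the Beats/lumberjack frames not being valid JSON.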

Hi, I need to use the beats input, but I found that Logstash stops accepting messages when it can't handle them (there are ~500K messages in 15 min). I thought Logstash and Filebeat communicate with each other and slow the message flow down when needed (back pressure), but now it looks like Logstash just stops accepting messages and terminates the connections.

The other inputs are OK; it just stops accepting all connections on the beats input.
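
To illustrate the kind of throttling I mean, this is roughly what I would expect to help if it is purely a load problem (values are guesses on both sides, not something I have verified):

    # filebeat.yml - smaller batches, so each publish call is easier to ack
    output.logstash:
      hosts: ["logstash-broker.mydomain:5044"]
      bulk_max_size: 512

    # logstash.yml - more workers / bigger batches pulling from the inputs
    pipeline.workers: 4
    pipeline.batch.size: 250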

Hi, I'm only sending a few test messages but still getting this "reset by peer" error. I'm trying to get some debug output from the Logstash end.
