Filebeat write: connection reset by peer

Hi everyone!
Here's my problem: when I start Filebeat on my clients, I keep getting the same error:
ERR Failed to publish events: write tcp client_ip:41352->elk_ip:5044: write: connection reset by peer

Logstash, Elasticsearch, Kibana & Filebeat are all the same version, 6.1.

Here are the configuration files:

cat logstash.yml | egrep -v "(^#.*|^$)" :
    path.data: /var/lib/logstash
    path.config: /etc/logstash/conf.d/*.conf
    path.logs: /var/log/logstash

AND:

cat /etc/logstash/conf.d/* :
    input {
      beats {
        port => 5044
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
        # Close the idle connection after 1 minute instead of 15 seconds
        #client_inactivity_timeout => 60
      }
    }
    filter {
      if [type] == "syslog" {
        grok {
          match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        syslog_pri { }
        date {
          match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        sniffing => true
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }

AND:

cat filebeat.yml | egrep -v "#" : 
    filebeat.prospectors:
    - type: log
      enabled: true
      paths:
        - /var/log/syslog
        - /var/log/auth.log

    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false

    setup.template.settings:
      index.number_of_shards: 3

    output.logstash:
      hosts: ["elk_ip:5044"]
      ssl:
        certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

I have looked through the other topics but haven't found anything that solves my problem :sweat_smile:. If you need any more information, don't hesitate to ask.
Thanks in advance for all your help!

Can you add the beats input version to this issue? We have fixed a few problems regarding connection reset by peer recently, and I want to make sure you are using the latest beats input. Plugins have an independent release cycle.

You can get the version of the beats input by using the following command:

bin/logstash-plugin list --verbose beats
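If Logstash was installed from a package, the binary usually lives under /usr/share/logstash, so the full path would be something like:

    /usr/share/logstash/bin/logstash-plugin list --verbose beats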

Thanks

Here:

root@elkserver:/etc/logstash# /usr/share/logstash/bin/logstash-plugin list --verbose beats
logstash-input-beats (5.0.4)
  • I restarted my servers (ELK and client).
  • On my client I renamed /etc/filebeat/modules.d/logstash.yml.disabled to logstash.yml and restarted the Filebeat service (commands below).
  • Renewed the SSL certificates.
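For reference, the module rename was essentially the standard rename plus a service restart, something like:

    mv /etc/filebeat/modules.d/logstash.yml.disabled /etc/filebeat/modules.d/logstash.yml
    systemctl restart filebeat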

When I run:

root@assurancetourix:/etc/logstash# /usr/share/logstash/bin/logstash  -f /etc/logstash/conf.d/ --path.settings=/etc/logstash -t
2018-01-12 14:03:39,727 main ERROR Unable to locate appender "${sys:ls.log.format}_rolling" for logger config "root"
2018-01-12 14:03:44,130 main ERROR Unable to locate appender "${sys:ls.log.format}_rolling" for logger config "root"
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

I now have a new error on the client:

root@glpi:/etc/filebeat# tail -f /var/log/filebeat/filebeat
2018-01-12T13:38:58+01:00 INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=8639328 beat.memstats.memory_alloc=4328280 beat.memstats.memory_total=15679176 filebeat.harvester.open_files=2 filebeat.harvester.running=2 libbeat.config.module.running=1 libbeat.pipeline.clients=5 libbeat.pipeline.events.active=4117 libbeat.pipeline.events.retry=2048 registrar.states.current=2
2018-01-12T13:39:28+01:00 INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=8639328 beat.memstats.memory_alloc=4356568 beat.memstats.memory_total=15707464 filebeat.harvester.open_files=2 filebeat.harvester.running=2 libbeat.config.module.running=1 libbeat.pipeline.clients=5 libbeat.pipeline.events.active=4117 registrar.states.current=2
2018-01-12T13:39:31+01:00 ERR  Failed to connect: dial tcp 172.16.1.183:5044: getsockopt: connection refused
2018-01-12T13:39:58+01:00 INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=8639328 beat.memstats.memory_alloc=4384344 beat.memstats.memory_total=15735240 filebeat.harvester.open_files=2 filebeat.harvester.running=2 libbeat.config.module.running=1 libbeat.pipeline.clients=5 libbeat.pipeline.events.active=4117 libbeat.pipeline.events.retry=2048 registrar.states.current=2
2018-01-12T13:40:28+01:00 INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=8639328 beat.memstats.memory_alloc=4405416 beat.memstats.memory_total=15756312 filebeat.harvester.open_files=2 filebeat.harvester.running=2 libbeat.config.module.running=1 libbeat.pipeline.clients=5 libbeat.pipeline.events.active=4117 registrar.states.current=2
2018-01-12T13:40:31+01:00 ERR  Failed to connect: dial tcp 172.16.1.183:5044: getsockopt: connection refused
2018-01-12T13:40:58+01:00 INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=8639328 beat.memstats.memory_alloc=4331560 beat.memstats.memory_total=15781760 filebeat.harvester.open_files=2 filebeat.harvester.running=2 libbeat.config.module.running=1 libbeat.pipeline.clients=5 libbeat.pipeline.events.active=4117 libbeat.pipeline.events.retry=2048 registrar.states.current=2
2018-01-12T13:41:28+01:00 INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=8639328 beat.memstats.memory_alloc=4353496 beat.memstats.memory_total=15803696 filebeat.harvester.open_files=2 filebeat.harvester.running=2 libbeat.config.module.running=1 libbeat.pipeline.clients=5 libbeat.pipeline.events.active=4117 registrar.states.current=2
2018-01-12T13:41:31+01:00 ERR  Failed to connect: dial tcp 172.16.1.183:5044: getsockopt: connection refused
2018-01-12T13:41:58+01:00 INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=8639328 beat.memstats.memory_alloc=4382072 beat.memstats.memory_total=15832272 filebeat.harvester.open_files=2 filebeat.harvester.running=2 libbeat.config.module.running=1 libbeat.pipeline.clients=5 libbeat.pipeline.events.active=4117 libbeat.pipeline.events.retry=2048 registrar.states.current=2

AND:

root@glpi:/etc/filebeat# telnet 172.16.1.183 5044
Trying 172.16.1.183...
telnet: Unable to connect to remote host: Connection refused

I was hoping to get this running quickly but keep tripping on problems between Logstash & Filebeat. Grateful for your help ^^

How do I update the plugins? Or check if my plugins are up to date?

It seems the ELK server is not listening on port 5044...

 netstat -ntlp | grep LISTEN
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      965/node
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1293/nginx -g daemo
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1149/sshd
tcp6       0      0 ::1:9200                :::*                    LISTEN      1140/java
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      1140/java
tcp6       0      0 ::1:9300                :::*                    LISTEN      1140/java
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      1140/java
tcp6       0      0 :::22                   :::*                    LISTEN      1149/sshd

The ufw firewall is active and allows 5044.
How do I now force Logstash to listen on port 5044?

Sorry for the delay.

You can update the plugin with bin/logstash-plugin update logstash-input-beats. It will tell you if it needs to be updated.
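For example, with the package install path from your earlier output (restart Logstash afterwards so the updated plugin is loaded):

    # Update the beats input plugin to the latest released version
    /usr/share/logstash/bin/logstash-plugin update logstash-input-beats
    # Restart Logstash so it picks up the new plugin version
    systemctl restart logstash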

@DavidDebray Looking at your configuration, it is already set to listen on 5044 because of the port => 5044 setting; by default we bind to all IPs.

Do you see any errors when starting up Logstash?
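For example, either of these should show startup errors (assuming a systemd/package install, and the default plain log file name under the path.logs from your logstash.yml):

    # Recent Logstash output captured by systemd
    journalctl -u logstash --no-pager | tail -n 100
    # or, if file logging is working, the most recent errors in the plain log
    grep -i error /var/log/logstash/logstash-plain.log | tail -n 20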

Hi!
No errors that I can see, but Logstash consumes a huge amount of CPU!
systemctl stop logstash => cpu ~1%
systemctl start logstash => cpu ~98%

Logstash is not receiving any data yet... it seems there is some kind of "loop" in the Logstash processes...

OK, let's take a step back and get some parts working. The fact that LS is using 100% CPU makes me think something may be wrong on the Logstash side, perhaps in the grok filter.

Can you try the following:

  • Remove any TLS requirements from both configurations.
  • Remove the filter part of the Logstash config.
  • Start Logstash with the debug log on (--log.level debug), as in the example below.
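For the last step, something along these lines should work (stop the service first so port 5044 is free, and reuse the paths from your earlier config-test command):

    # Run Logstash in the foreground with debug logging
    systemctl stop logstash
    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/ --path.settings=/etc/logstash --log.level debug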

Do you see any events with those changes?

Thanks

I did what you asked (disabled TLS and removed the grok filter) and restarted the Filebeat and Logstash services. The CPU is back to normal; however, I do not know how to launch Logstash with:

--log.level debug

In /var/log/logstash/logstash.err I have:
/opt/logstash/bin/logstash: not found

And:

root@assurancetourix:/usr/share/logstash# systemctl status logstash
● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
   Active: active (running) since mer. 2018-01-24 14:37:13 CET; 10min ago
 Main PID: 23998 (java)
    Tasks: 48
   Memory: 679.2M
      CPU: 1min 36.738s
   CGroup: /system.slice/logstash.service
           └─23998 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless

janv. 24 14:37:31 assurancetourix logstash[23998]:         at org.jruby.Main.main(Main.java:204)
janv. 24 14:37:31 assurancetourix logstash[23998]: Caused by: java.lang.IllegalStateException: ManagerFactory [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileM
janv. 24 14:37:31 assurancetourix logstash[23998]:         at org.apache.logging.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:75)
janv. 24 14:37:31 assurancetourix logstash[23998]:         at org.apache.logging.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:81)
janv. 24 14:37:31 assurancetourix logstash[23998]:         at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.getFileManager(RollingFileManager.java:103)
janv. 24 14:37:31 assurancetourix logstash[23998]:         at org.apache.logging.log4j.core.appender.RollingFileAppender.createAppender(RollingFileAppender.java:191)
janv. 24 14:37:31 assurancetourix logstash[23998]:         ... 86 more
janv. 24 14:37:31 assurancetourix logstash[23998]: 2018-01-24 14:37:31,287 main ERROR Null object returned for RollingFile in Appenders.
janv. 24 14:37:31 assurancetourix logstash[23998]: 2018-01-24 14:37:31,288 main ERROR Null object returned for RollingFile in Appenders.
janv. 24 14:37:31 assurancetourix logstash[23998]: 2018-01-24 14:37:31,294 main ERROR Unable to locate appender "plain_rolling" for logger config "root"

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.