Hi, I am facing a problem with filebeat sending logs to Logstash. My architecture: filebeat (versions 5.3.0 and 5.2.2, tried both) -> logstash-broker (versions 5.2.2 and 5.3.0) -> redis-cache -> logstash-indexer -> elasticsearch.
The problem is between filebeat and the logstash-broker.
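For completeness, the logstash-indexer side reads from Redis roughly like this (a sketch; my real config has filters, and the elasticsearch host here is a placeholder):

```
input {
  redis {
    host      => "redis.mydomain"
    key       => "supervisor"
    data_type => "list"
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch.mydomain:9200"]
  }
}
```

That part of the pipeline is fine; whatever makes it into Redis shows up in Elasticsearch.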
Filebeat configuration:
- /etc/filebeat/filebeat.yml
```
filebeat.config_dir: /etc/filebeat/conf.d
output.logstash:
  hosts: ["logstash-broker.mydomain:5044"]
```
- /etc/filebeat/conf.d/supervisor
```
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/supervisor/*-stdout.log
  fields_under_root: true
  fields:
    type: supervisor
    lsi_name: supervisor
    lsi_type: filebeat
    lsi_port: 5044
    lso_name: NULL
  multiline:
    pattern: '^\s'
    match: before
```
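The YAML itself seems fine; filebeat's own syntax check (`-configtest` in 5.x) passes:

```
filebeat -configtest -e -c /etc/filebeat/filebeat.yml
```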
Logstash-broker configuration:
```
input {
  beats {
    port => 5044
  }
}
filter { }
output {
  redis {
    host         => "redis.mydomain"
    batch        => true
    batch_events => 5000
    key          => "supervisor"
    data_type    => "list"
  }
}
```
The Logstash INFO logs show nothing about this problem, and in DEBUG mode I can't tell what belongs to the beats input because of the other TCP/UDP inputs (those are working fine).
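If I read the Logstash docs right, the logging API can raise the level for just the beats input instead of everything, so the other inputs don't drown it out (the logger name is my guess from the docs):

```
curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d '
{
  "logger.logstash.inputs.beats": "DEBUG"
}'
```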
In the filebeat log I see:
> 2017-04-10T15:49:50+02:00 ERR Failed to publish events caused by: read tcp filebeat_ip:56088->logstash_ip:5044: read: connection reset by peer
> 2017-04-10T15:49:50+02:00 INFO Error publishing events (retrying): read tcp filebeat_ip:56088->logstash_ip:5044: read: connection reset by peer
> 2017-04-10T15:49:54+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_errors=1 libbeat.logstash.publish.write_bytes=332 libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.published_but_not_acked_events=10
I am running the latest logstash-input-beats plugin (version 3.1.14).
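Checked and updated with the plugin tool (paths assume the default package install):

```
/usr/share/logstash/bin/logstash-plugin list --verbose logstash-input-beats
/usr/share/logstash/bin/logstash-plugin update logstash-input-beats
```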
I found some similar topics here, but none of the solutions worked (I tried playing with client_inactivity_timeout, pipeline workers, timeouts, and a few more settings on both sides [filebeat and logstash]), as sketched below.
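For reference, this is roughly what I tried; the values varied between attempts and these are just the last ones I tested. On the Logstash side:

```
input {
  beats {
    port => 5044
    # default is 60s; raised to rule out idle-connection disconnects
    client_inactivity_timeout => 1200
  }
}
```

And on the filebeat side:

```
output.logstash:
  hosts: ["logstash-broker.mydomain:5044"]
  # wait longer for ACKs from Logstash (default 30s)
  timeout: 90
  # send smaller batches (default 2048)
  bulk_max_size: 1024
```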
When I restart the Logstash service some logs get processed, but after a while I get the connection reset by peer again. The Logstash instance is decently sized: 4 CPUs, 8 GB RAM, 4 GB heap.
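Heap was my first suspect, so I kept an eye on it through the monitoring API while the errors were happening:

```
curl -s 'localhost:9600/_node/stats/jvm?pretty'
```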
Any advice?
Thank you