The timeout occurs while waiting for the ACK signal from Logstash. The default timeout is 60 seconds. While Logstash is actively processing a batch of events, it sends an ACK signal every 5 seconds.
If filebeat does not receive this signal, the cause is usually either a network issue or contention in Logstash (induced by additional back-pressure from its outputs).
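If back-pressure is suspected, one option is to give Logstash longer to ACK on the filebeat side. A minimal sketch of the relevant `output.logstash` section (the host is taken from the logs below; the timeout value is illustrative, so check the default for your filebeat version):

```yaml
output.logstash:
  hosts: ["172.31.1.52:5044"]
  # allow more time for the ACK before the connection is considered dead
  timeout: 120
```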
That is a different error, showing "use of closed network connection". Could you please telnet to port 5044? I think filebeat is not able to connect to port 5044.
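If telnet is not installed on the node, a quick TCP check can also be done from Python. This is just a sketch of the connectivity test (the host/port below come from the logs in this thread):

```python
import socket

def can_connect(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check whether the Logstash beats input is reachable
# print(can_connect("172.31.1.52", 5044))
```

If this returns False from the filebeat node but True from elsewhere, the problem is on the network path rather than in the Logstash pipeline.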
Logstash could be resetting the connection due to inactivity, in which case this shouldn't be a problem. You can try increasing client_inactivity_timeout on the beats input.
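For reference, a sketch of the beats input with that setting raised (the default is 60 seconds; the value here is illustrative):

```
input {
  beats {
    port => 5044
    # keep idle connections from filebeat open longer before Logstash
    # closes them (seconds; default is 60)
    client_inactivity_timeout => 300
  }
}
```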
But from the logs, it looks like it's happening while sending. This could mean something is blocking the Logstash pipeline.
Does the problem occur if you output only to stdout or to a file from Logstash?
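A minimal pipeline for that test could look like the following sketch (the input port matches the logs in this thread):

```
input {
  beats {
    port => 5044
  }
}
output {
  # write events to stdout instead of elasticsearch to rule out
  # back-pressure from the downstream output
  stdout { codec => rubydebug }
}
```

If the error disappears with this config, the blockage is in the filters or the elasticsearch output rather than in the beats input itself.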
Unfortunately I'm not able to reproduce the error on my machine; everything is working fine here.
Could you please comment out your filter plugins and restart the service? This is just a trial: since Logstash is not receiving the data, something may be blocking the Logstash pipeline. Please try this and let me know if it works.
Still the same error. I deleted the filter section for that particular log in Logstash and restarted.
I don't know what's wrong. I have filebeat running on other nodes and it works fine, but not on this node where my nginx is running.
2018-04-18T10:40:09.809Z DEBUG [logstash] logstash/async.go:142 1024 events out of 1024 events sent to logstash host 172.31.1.52:5044. Continue sending
2018-04-18T10:40:09.809Z DEBUG [logstash] logstash/async.go:99 close connection
2018-04-18T10:40:09.809Z DEBUG [logstash] logstash/async.go:99 close connection
2018-04-18T10:40:09.809Z ERROR logstash/async.go:235 Failed to publish events caused by: write tcp 172.31.11.2:49364->172.31.1.52:5044: use of closed network connection
2018-04-18T10:40:10.809Z ERROR pipeline/output.go:92 Failed to publish events: write tcp 172.31.11.2:49364->172.31.1.52:5044: use of closed network connection
2018-04-18T10:40:10.809Z DEBUG [logstash] logstash/async.go:94 connect
2018-04-18T10:40:10.819Z DEBUG [logstash] logstash/async.go:142 1024 events out of 1024 events sent to logstash host 172.31.1.52:5044. Continue sending
2018-04-18T10:40:18.719Z DEBUG [prospector] prospector/prospector.go:124 Run prospector
2018-04-18T10:40:18.719Z DEBUG [prospector] log/prospector.go:147 Start next scan
2018-04-18T10:40:18.719Z DEBUG [prospector] log/prospector.go:361 Check file for harvesting: /dcos/volume1/carbook-test-nginx/logs/access.log
2018-04-18T10:40:18.719Z DEBUG [prospector] log/prospector.go:447 Update existing file for harvesting: /dcos/volume1/carbook-test-nginx/logs/access.log, offset: 593470
2018-04-18T10:40:18.719Z DEBUG [prospector] log/prospector.go:499 Harvester for file is still running: /dcos/volume1/carbook-test-nginx/logs/access.log
2018-04-18T10:40:18.719Z DEBUG [prospector] log/prospector.go:361 Check file for harvesting: /dcos/volume1/carbook-test-nginx/logs/error.log
2018-04-18T10:40:18.719Z DEBUG [prospector] log/prospector.go:447 Update existing file for harvesting: /dcos/volume1/carbook-test-nginx/logs/error.log, offset: 752242
2018-04-18T10:40:18.719Z DEBUG [prospector] log/prospector.go:499 Harvester for file is still running: /dcos/volume1/carbook-test-nginx/logs/error.log
2018-04-18T10:40:18.719Z DEBUG [prospector] log/prospector.go:168 Prospector states cleaned up. Before: 2, After: 2
The issue got resolved after deep troubleshooting. I am running my Logstash and Elasticsearch cluster as Docker containers on an overlay network, and the nginx node where filebeat runs is in a different VLAN from Logstash. I still haven't found where the packets are being dropped, but after running my Logstash container in bridge mode, it started working.
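For anyone hitting the same thing, a sketch of what "bridge mode" means here in docker-compose terms (image tag and service name are assumptions, not from the original setup):

```yaml
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.3
    # use the default bridge network instead of the overlay network
    network_mode: bridge
    ports:
      # publish the beats input port so filebeat on other VLANs can reach it
      - "5044:5044"
```

On the overlay network the beats traffic crosses the VXLAN data path between VLANs, which is one plausible place for the drops; on the bridge network the port is published directly on the host.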