Every service runs on a separate machine: ela01 has Elasticsearch, and log has Logstash and Filebeat… you know, the Filebeat service.
The problem I'm facing is that Filebeat gives me the error below:

2015-12-16T12:21:58+01:00 INFO backoff retry: 4s
2015-12-16T12:22:02+01:00 INFO Error publishing events (retrying): EOF
2015-12-16T12:22:02+01:00 INFO Error publishing events (retrying): read tcp 192.168.28.162:51149->192.168.28.163:5044: read: connection reset by peer
2015-12-16T12:22:02+01:00 INFO send fail
In case this is not intentional, I noticed that the configuration has both an elasticsearch output and a logstash output configured.
If you intend for events to go from Filebeat -> Logstash -> Elasticsearch, then you can remove the elasticsearch section of the configuration and send events only to Logstash. An elasticsearch output will need to be added to your Logstash config. There is an example in the Getting Started guide.
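In case it helps, a minimal Logstash pipeline with a beats input and an elasticsearch output might look like the sketch below; the host name, port, and index pattern are placeholders you would adapt to your own setup:

```conf
input {
  beats {
    # Port the Filebeat output.logstash section points at
    port => 5044
  }
}

output {
  elasticsearch {
    # Placeholder: your Elasticsearch host and port
    hosts => ["ela01:9200"]
    # Daily index, a common convention for Filebeat data
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}
```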
I am having a similar issue, and while going through this thread I noticed that your "Getting Started" link (which might help me too) doesn't work.
I am getting a similar kind of error. I am trying to monitor logs from different hosts using Filebeat, and I get this error on some of them:
2016/10/26 17:28:48.159067 single.go:140: ERR Connecting error publishing events (retrying): read tcp 10.0.1.151:41256->54.214.224.161:5044: i/o timeout
2016/10/26 17:29:17.922310 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.write_bytes=132 libbeat.logstash.publish.read_errors=1
This is happening on some hosts, while other hosts running Filebeat are pushing logs to Logstash without problems.
I have already checked connectivity and that is fine.
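Beyond a basic ping, it can be worth verifying from the failing host that the Logstash beats port itself accepts TCP connections, since an i/o timeout often points at a firewall or security-group rule blocking that specific port. A minimal sketch (the host and port below are placeholders for your setup):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and attempts a full TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable hosts
        return False

# Example (placeholder address of the Logstash beats input):
# can_connect("54.214.224.161", 5044)
```

If this returns False from the failing hosts but True from the working ones, the problem is network reachability rather than the Filebeat or Logstash configuration.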
My filebeat.yml is as follows:
filebeat.prospectors: