ELK stack Filebeat configuration issue: single.go:77: INFO Error publishing events (retrying): EOF

I have configured an ELK stack in my local environment by following the document below (link to the document).

As per the documentation, and to the best of my understanding, I completed the implementation successfully. However, the Filebeat client is not publishing data to Elasticsearch:

filebeat -c /etc/filebeat/filebeat.yml -e -v

2018/04/25 12:02:14.609792 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
2018/04/25 12:02:14.610560 logstash.go:106: INFO Max Retries set to: 3
2018/04/25 12:02:14.611332 outputs.go:126: INFO Activated logstash as output plugin.
2018/04/25 12:02:14.612172 publish.go:288: INFO Publisher name: localhost
2018/04/25 12:02:14.613198 async.go:78: INFO Flush Interval set to: 1s
2018/04/25 12:02:14.613502 async.go:84: INFO Max Bulk Size set to: 1
2018/04/25 12:02:14.614102 beat.go:168: INFO Init Beat: filebeat; Version: 1.3.1
2018/04/25 12:02:14.615784 beat.go:194: INFO filebeat sucessfully setup. Start running.
2018/04/25 12:02:14.615835 registrar.go:68: INFO Registry file set to: /var/lib/filebeat/registry
2018/04/25 12:02:14.615859 registrar.go:80: INFO Loading registrar data from /var/lib/filebeat/registry
2018/04/25 12:02:14.616068 prospector.go:133: INFO Set ignore_older duration to 0s
2018/04/25 12:02:14.616082 prospector.go:133: INFO Set close_older duration to 1h0m0s
2018/04/25 12:02:14.616091 prospector.go:133: INFO Set scan_frequency duration to 10s
2018/04/25 12:02:14.616100 prospector.go:93: INFO Input type set to: log
2018/04/25 12:02:14.616109 prospector.go:133: INFO Set backoff duration to 1s
2018/04/25 12:02:14.616118 prospector.go:133: INFO Set max_backoff duration to 10s
2018/04/25 12:02:14.616124 prospector.go:113: INFO force_close_file is disabled
2018/04/25 12:02:14.616139 prospector.go:143: INFO Starting prospector of type: log
2018/04/25 12:02:14.619449 log.go:115: INFO Harvester started for file: /var/log/auth.log
2018/04/25 12:02:14.619690 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2018/04/25 12:02:14.620954 log.go:115: INFO Harvester started for file: /var/log/syslog
2018/04/25 12:02:14.623061 crawler.go:78: INFO All prospectors initialised with 7 states to persist
2018/04/25 12:02:14.623102 registrar.go:87: INFO Starting Registrar
2018/04/25 12:02:14.623140 publish.go:88: INFO Start sending events to output
2018/04/25 12:02:14.795533 single.go:77: INFO Error publishing events (retrying): EOF
2018/04/25 12:02:14.795847 single.go:154: INFO send fail
2018/04/25 12:02:14.795946 single.go:161: INFO backoff retry: 1s
2018/04/25 12:02:15.807622 single.go:77: INFO Error publishing events (retrying): EOF
2018/04/25 12:02:15.808011 single.go:154: INFO send fail
2018/04/25 12:02:15.808087 single.go:161: INFO backoff retry: 2s
2018/04/25 12:02:17.824012 single.go:77: INFO Error publishing events (retrying): EOF
2018/04/25 12:02:17.824411 single.go:154: INFO send fail
2018/04/25 12:02:17.824480 single.go:161: INFO backoff retry: 4s
2018/04/25 12:02:21.829173 single.go:77: INFO Error publishing events (retrying): EOF
2018/04/25 12:02:21.829524 single.go:154: INFO send fail
2018/04/25 12:02:21.829715 single.go:161: INFO backoff retry: 8s
2018/04/25 12:02:29.836255 single.go:77: INFO Error publishing events (retrying): EOF
2018/04/25 12:02:29.836625 single.go:154: INFO send fail
2018/04/25 12:02:29.836691 single.go:161: INFO backoff retry: 16s
2018/04/25 12:02:45.845455 single.go:77: INFO Error publishing events (retrying): EOF
2018/04/25 12:02:45.845504 single.go:154: INFO send fail
2018/04/25 12:02:45.845526 single.go:161: INFO backoff retry: 32s
2018/04/25 12:03:17.852987 single.go:77: INFO Error publishing events (retrying): EOF
2018/04/25 12:03:17.853040 single.go:154: INFO send fail
2018/04/25 12:03:17.853056 single.go:161: INFO backoff retry: 1m0s
2018/04/25 12:04:17.857764 single.go:77: INFO Error publishing events (retrying): EOF
2018/04/25 12:04:17.857808 single.go:154: INFO send fail
2018/04/25 12:04:17.857824 single.go:161: INFO backoff retry: 1m0s
2018/04/25 12:05:17.882418 single.go:77: INFO Error publishing events (retrying): EOF
2018/04/25 12:05:17.882437 single.go:154: INFO send fail
2018/04/25 12:05:17.882445 single.go:161: INFO backoff retry

Because of the above issue, logs are not being published to Elasticsearch. :frowning:

It seems that you have installed a very old version of Filebeat (version 1.3.1 was released on September 15, 2016). The current version is Filebeat 6.2.4.

I highly recommend that you follow the official Filebeat Getting Started Guide using the latest version of Filebeat and the Elastic Stack.
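For reference, here is a minimal sketch of what a Filebeat 6.x filebeat.yml shipping to Logstash could look like. The log paths are taken from your harvester output above, but the localhost:5044 endpoint is an assumption; adjust it to wherever your Logstash beats input actually listens.

```yaml
# Minimal Filebeat 6.x sketch; the Logstash host/port is an assumption.
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /var/log/auth.log
      - /var/log/syslog

output.logstash:
  # Must match the port of the beats input in your Logstash pipeline config
  hosts: ["localhost:5044"]
```

Note that the EOF errors in your log usually mean the remote end closed the connection, which often happens when an old Beats client and a newer Logstash (or a mismatched TLS setup) cannot agree on the protocol, so upgrading both sides to matching versions is the right fix.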

Another option for trying out the whole Elastic Stack inside Docker is elastic/stack-docker. Just clone the repo and run docker-compose up.

Thanks, @andrewkroh! Once we updated Filebeat to the version you suggested, it worked like a charm! :slight_smile:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.