I'm concerned about data loss, because I get the following error quite often:
2017-02-15T23:58:50Z ERR Failed to publish events caused by: EOF
Typically the files that I'm pushing to Logstash only get written to for 5 minutes, then are moved to S3 after 30 minutes to an hour. I'm not sure if it's a configuration issue, or just noise.
Any suggestions for my config would be much appreciated.
I'm running these as SysV services on AWS Linux:
Filebeat - 5.2.1
Logstash - 2.4.1 (Need the Kinesis Output, which is why I'm still on this version)
Which logstash-input-beats plugin version do you have installed? EOF (end of file) happens if the connection is closed by the remote host (the Logstash host). Have you checked the Logstash logs? Upon EOF, Filebeat reconnects and continues sending, so as long as your logs still arrive in time, it's not too critical.
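For reference, a common source of these EOF messages is the beats input closing connections it considers idle, which the plugin's client_inactivity_timeout setting controls (if your logstash-input-beats version supports it). A minimal sketch, not your actual config; the port and timeout values are illustrative assumptions:

    input {
      beats {
        port => 5044                        # assumed port
        # The beats input closes connections that have been idle for this many
        # seconds (default 60); Filebeat then logs EOF and reconnects.
        client_inactivity_timeout => 300    # illustrative value
      }
    }

Raising the timeout mostly just reduces the log noise; the reconnect itself does not drop events.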
The only ways you can lose records are: writing logs much faster than Filebeat/Logstash can consume them and then deleting files that haven't been fully processed yet, or Logstash being killed with events in its pipeline that were already ACKed to Filebeat. Better to open another topic for that.
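To reduce the risk of the first case (files moved off to S3 before Filebeat has finished reading them), the prospector close/clean settings are the relevant knobs. A minimal sketch for Filebeat 5.x; the path and durations are assumptions based on the 5-minute write window and 30-60 minute retention you describe, not a drop-in config:

    filebeat.prospectors:
    - input_type: log
      paths:
        - /var/log/myapp/*.log   # hypothetical path
      # Files stop being written after ~5 minutes, so the harvester can be
      # closed once a file has been idle for a while.
      close_inactive: 10m
      # Files disappear (moved to S3) after 30-60 minutes; close the harvester
      # and drop registry state once they are gone.
      close_removed: true
      clean_removed: true
      # Skip files older than the local retention window (illustrative value).
      ignore_older: 2h

The important part is that the file must stay on disk until Filebeat has read it to the end; the close/clean options only keep the registry tidy, they don't protect against deleting an unread file.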
In case events are dropped, you should find out whether they are dropped between Filebeat and Logstash or between Logstash and Kinesis. If it's on the way to Kinesis, it's probably best to open a question in the Logstash forum.