I'm encountering similar errors on Logstash 5.0.0-alpha5 (things worked fine on v2.4). I've truncated the error below to stay under this forum's 5,000-character limit.
{
"timestamp": "2016-11-01T11:33:55.204000+0000",
"message": "Pipeline aborted due to error",
"exception": {
"cause": null,
@robrant thanks for posting that link, it helped a little. I updated my configuration according to the advice given on StackOverflow, but unfortunately I'm still getting errors.
So, I've overcome that error. I completely changed my config (which I'll post); while that didn't explicitly fix the Ruby error, I wasn't going to get past it without fixing my config first. The error (at least in my case) was caused by this versioning issue:
which links to this PR:
So... in the short term, while this change hasn't made it into a release, here's what I did. @Ryan_Grannell - this might work for you if your filebeat and Logstash config is already in good order.
First, I checked my version of the Logstash beats input plugin:
$> sudo updatedb
$> locate bin/logstash-plugin
$> cd /usr/share/logstash/ # or wherever yours is installed
$> bin/logstash-plugin list --verbose | grep beat
Mine confirmed that I wasn't on 3.1.4 but on the beta release, so I installed version 3.1.4 of the plugin.
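(In case it helps anyone following along: logstash-plugin can install a specific version with the --version flag. I'm assuming the beats input plugin name here, logstash-input-beats - double-check the name that grep showed you. Something like:)
$> bin/logstash-plugin install --version 3.1.4 logstash-input-beats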
I think I had some weirdness where it continued to error after this install, but it was late, so it might have just been me. Run bin/logstash-plugin list --verbose again just to make sure you've now got the right version.
I've now got an EOF error in the filebeat log, so I'll be kicking around discuss.elastic.co for a while yet...
@robrant Thanks a lot! That worked, both filebeat and logstash started correctly.
I ran into the same EOF error as you; this GitHub issue seems to track the problem.
I imagine my problem is related to
2016/11/08 16:20:45.273578 single.go:77: INFO Error publishing events (retrying): read tcp 10.0.0.4:32864->****************:***************: read: connection reset by peer
in the output log:
2016/11/08 16:20:42.659130 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
2016/11/08 16:20:42.659794 logstash.go:106: INFO Max Retries set to: 3
2016/11/08 16:20:42.760011 outputs.go:126: INFO Activated logstash as output plugin.
2016/11/08 16:20:42.760080 publish.go:288: INFO Publisher name: *****************-ops-vm-logstash-jaw-0
2016/11/08 16:20:42.760276 async.go:78: INFO Flush Interval set to: 1s
2016/11/08 16:20:42.760302 async.go:84: INFO Max Bulk Size set to: 2048
2016/11/08 16:20:42.760337 beat.go:168: INFO Init Beat: filebeat; Version: 1.3.1
2016/11/08 16:20:42.760688 beat.go:194: INFO filebeat sucessfully setup. Start running.
2016/11/08 16:20:42.760731 registrar.go:68: INFO Registry file set to: /etc/filebeat/.filebeat
2016/11/08 16:20:42.760846 prospector.go:133: INFO Set ignore_older duration to 0s
2016/11/08 16:20:42.760869 prospector.go:133: INFO Set close_older duration to 1h0m0s
2016/11/08 16:20:42.760924 prospector.go:133: INFO Set scan_frequency duration to 10s
2016/11/08 16:20:42.760969 prospector.go:90: INFO Invalid input type set:
2016/11/08 16:20:42.761016 prospector.go:93: INFO Input type set to: log
2016/11/08 16:20:42.761073 prospector.go:133: INFO Set backoff duration to 1s
2016/11/08 16:20:42.761123 prospector.go:133: INFO Set max_backoff duration to 10s
2016/11/08 16:20:42.761176 prospector.go:113: INFO force_close_file is disabled
2016/11/08 16:20:42.761259 prospector.go:143: INFO Starting prospector of type: log
2016/11/08 16:20:42.761412 log.go:115: INFO Harvester started for file: /var/log/logstash/logstash.log
2016/11/08 16:20:42.761675 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/11/08 16:20:42.762183 log.go:115: INFO Harvester started for file: /var/log/auth.log
2016/11/08 16:20:42.762313 crawler.go:78: INFO All prospectors initialised with 3 states to persist
2016/11/08 16:20:42.762355 registrar.go:87: INFO Starting Registrar
2016/11/08 16:20:42.762420 publish.go:88: INFO Start sending events to output
2016/11/08 16:20:42.763934 log.go:115: INFO Harvester started for file: /var/log/*****************/deployment.log
2016/11/08 16:20:45.273578 single.go:77: INFO Error publishing events (retrying): read tcp 10.0.0.4:32864->****************:***************: read: connection reset by peer
2016/11/08 16:20:45.273609 single.go:154: INFO send fail
2016/11/08 16:20:45.273619 single.go:161: INFO backoff retry: 1s
2016/11/08 16:20:46.382501 single.go:77: INFO Error publishing events (retrying): EOF
2016/11/08 16:20:46.382532 single.go:154: INFO send fail
2016/11/08 16:20:46.382632 single.go:161: INFO backoff retry: 2s
2016/11/08 16:20:48.490938 single.go:77: INFO Error publishing events (retrying): EOF
2016/11/08 16:20:48.490988 single.go:154: INFO send fail
2016/11/08 16:20:48.490999 single.go:161: INFO backoff retry: 4s
2016/11/08 16:20:52.557965 single.go:77: INFO Error publishing events (retrying): EOF
2016/11/08 16:20:52.558007 single.go:154: INFO send fail
2016/11/08 16:20:52.558019 single.go:161: INFO backoff retry: 8s
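The retry delays in that log (1s, 2s, 4s, 8s) look like filebeat's exponential backoff on publish failures. A minimal sketch of that doubling rule, assuming a cap like the 10s max_backoff shown in the prospector settings above (the output retry may use a different cap; this is illustrative only, not filebeat's actual code):

```python
def backoff_delays(initial=1, factor=2, max_backoff=10, attempts=5):
    """Yield retry delays in seconds, doubling each time up to max_backoff."""
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, max_backoff)

# Matches the 1s, 2s, 4s delays seen in the log before the cap kicks in.
print(list(backoff_delays()))  # [1, 2, 4, 8, 10]
```

So the repeated "send fail / backoff retry" lines are expected behavior while the Logstash side keeps resetting the connection; fixing the plugin version on the Logstash side is what stops the retries.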