Filebeat error on processing log file

I'm trying to ingest logs with Filebeat, parse them through Logstash, and index them into Elasticsearch. To get the pipeline working first, I haven't added the grok filter yet, but the log file is not moving forward, with the error below.

I'm using version 6.x.
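For reference, since there is no grok yet, the Logstash pipeline is just a passthrough. A minimal sketch of such a config (the port, host, and index are placeholders, not my actual values) would look like:

```conf
# Minimal passthrough pipeline: receive from Beats, send to Elasticsearch
input {
  beats {
    port => 5044           # port Filebeat connects to (placeholder)
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # Elasticsearch endpoint (placeholder)
  }
}
```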

2019-05-15T13:20:33.679-0500 INFO log/harvester.go:216 Harvester started for file: /var/log/SDP/events/EventLogFile.txt.0
2019-05-15T13:20:33.725-0500 ERROR logstash/async.go:235 Failed to publish events caused by: write tcp> write: connection reset by peer
2019-05-15T13:20:34.726-0500 ERROR pipeline/output.go:92 Failed to publish events: write tcp> write: connection reset by peer
2019-05-15T13:20:43.620-0500 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":670,"time":675},"total":{"ticks":6380,"time":6391,"value":6380},"user":{"ticks":5710,"time":5716}},"info":{"ephemeral_id":"7baa6f1d-01fd-4c42-b4b9-7867e6cf2083","uptime":{"ms":1740008}},"memstats":{"gc_next":12478016,"memory_alloc":9147128,"memory_total":1016521112,"rss":10252288}},"filebeat":{"events":{"added":43656,"done":43656},"harvester":{"open_files":2,"running":2,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":43655,"batches":23,"failed":2048,"total":45703},"read":{"bytes":132},"write":{"bytes":3462886,"errors":1}},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"published":43655,"retry":4096,"total":43656},"queue":{"acked":43655}}},"registrar":{"states":{"current":11,"update":43656},"writes":23},"system":{"load":{"1":1.39,"15":0.67,"5":0.72,"norm":{"1":0.029,"15":0.014,"5":0.015}}}}}}

Looking at the above errors, are you sure that the configuration is valid? Are you connecting to the right host / port?
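As a sanity check, the Filebeat side should point at the Logstash beats input; something along these lines in filebeat.yml (host and port here are placeholders):

```yaml
# filebeat.yml output section: must match the host/port of the Logstash beats input
output.logstash:
  hosts: ["localhost:5044"]
```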

@pierhugues Yes, I checked that, but after running the logstash-beats update it started working fine.
Another question: when I run Logstash as a service, I don't have to mention the pipeline, whereas when I run it standalone (i.e. from bin/logstash) I do have to mention it. Can that be an issue? I did an RPM install, so my bin is in /usr/share/logstash and the pipeline and .conf files are in /etc/logstash.

@ankitachow Yes, by default when you start Logstash as a service, it will load all the configuration files it can find in /etc/logstash and merge them into a single pipeline.
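In other words, the two invocations differ only in where the pipeline config comes from; roughly (the .conf file name below is just an example):

```shell
# Standalone: you must point Logstash at the pipeline config yourself
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/my-pipeline.conf

# As a service: the config path is resolved from the packaged settings,
# so no -f flag is needed
sudo systemctl start logstash
```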

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.