Logstash losing ES connection periodically

I have been trying to narrow down what is causing my Logstash instance to periodically lose its connection to Elasticsearch (they run on separate machines). I keep getting the following errors, with no consistent interval between them:

```
[2017-04-28T14:37:58,513][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::Elasticsearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
[2017-04-28T14:37:58,514][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::Elasticsearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64}
```

There are no errors in the ES logs at all, and no spikes in any resources: I have monitored I/O reads and writes, load, CPU utilization, and memory. I installed X-Pack on all the machines, and only the Elasticsearch graphs show gaps where no data is received. Nothing spikes before the data stops arriving, either.

Eventually the ES server becomes unresponsive and I have to reboot it. I suspect it has something to do with receiving logs from Logstash (at least I think so). Can you suggest other ways to test and narrow down what it might be? I have also been tweaking the pipeline settings, thinking that was the cause, but nothing has worked. Let me know if any other configuration details would help.
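One way to test from the Logstash host is to poll the cluster while the problem is happening. A sketch of what I would check (replace `es-host` with your Elasticsearch node's address; the `grep` assumes the pretty-printed JSON layout):

```shell
# Is the cluster reachable and green/yellow from the Logstash machine?
curl -s 'http://es-host:9200/_cluster/health?pretty'

# Are bulk requests being rejected? A growing "rejected" count under the
# bulk thread pool would explain Logstash dropping its connections.
curl -s 'http://es-host:9200/_nodes/stats/thread_pool?pretty' | grep -A 6 '"bulk"'
```

If the health call hangs or times out at the same moment Logstash logs the pool error, the problem is on the ES side (or the network between them) rather than in Logstash itself.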

P.S. I apologize if this is not enough information or if I need clarification later. :grinning:


This sounds like ES is crashing rather than Logstash having an issue. Does that sound correct?

Have you looked at the ES logs? Has anything popped up there?

I have checked the ES logs, but no errors show up. If I stop the Logstash service, the ES server never becomes unresponsive.

How would I tweak how much data Logstash sends to Elasticsearch? I have already adjusted pipeline.workers and pipeline.output.workers to see if it helps in this situation, but I am poking in the dark here.
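For reference, the knobs that control how much Logstash sends per bulk request live in `logstash.yml` alongside the worker settings you already changed. A sketch with the defaults (the numbers here are illustrative starting points, not recommendations):

```yaml
# logstash.yml -- batch settings that shape bulk request size
pipeline.workers: 4        # threads running the filter + output stages
pipeline.batch.size: 125   # events each worker collects before one bulk request
pipeline.batch.delay: 5    # ms to wait before flushing an undersized batch
```

Lowering `pipeline.batch.size` (or the worker count) reduces the size and concurrency of the bulk requests hitting ES, which can help confirm whether the load from Logstash is what pushes the server over.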
