Logstash constantly losing connection to Elastic

I've been parsing with Logstash on my new machine in exactly the same way as I did in the past, but this time Logstash keeps losing its connection to Elasticsearch ("Attempted to send bulk request but there are no living connections in the connection pool"). It recovers on its own after maybe 5-10 minutes, but this happens regularly and is probably slowing the parsing down considerably.

What has changed:

  1. Elastic Stack upgraded from 6.2.4 to 6.3.0. I heard that one of the changes affects a timeout setting, but I'm not sure why or how that would affect my configuration, since I haven't changed much of anything else.
  2. Memory settings for Elasticsearch: I now give it 31 GB of heap, up from 8 GB (jvm.options change sketched below).
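For reference, the heap change was just the usual jvm.options edit, along these lines (the file location depends on how Elasticsearch is installed):

    # /etc/elasticsearch/jvm.options (path varies by install method)
    # Heap raised from 8 GB to 31 GB; kept just under 32 GB so the JVM
    # can still use compressed object pointers.
    -Xms31g
    -Xmx31g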

Any ideas?

[screenshot: logstash-error]

Any ideas? This is happening all the time, and the problem seems to be on the Logstash side (Elasticsearch is not showing any errors). Logstash is simply parsing logs on the local machine, with Elasticsearch also running locally, so I have no clue what is wrong.
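In case it matters, the output side of the pipeline is just a plain elasticsearch output pointing at the local node, something along these lines (the index name here is a placeholder, not my real one):

    output {
      elasticsearch {
        # Elasticsearch runs on the same machine as Logstash
        hosts => ["http://localhost:9200"]
        index => "logs-%{+YYYY.MM.dd}"
      }
    }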

Just FYI for anyone who faces the same problem as me:

I no longer see this issue after changing two things, though I'm not sure which one solved it:

  1. Moved Elasticsearch to an SSD. If this was the issue, Elasticsearch possibly couldn't keep up with Logstash, even though the logs did not indicate that was the case.
  2. Changed my Logstash filter configuration. Previously, my mutate and date filters always ran, whether or not grok parsing succeeded; now they only run if "_grokparsefailure" is not in the event's tags (see the sketch after this list). If this was the issue, it means either my mutate (lowercase, remove_field, add_field) or date (match) was somehow breaking things whenever grok failed to parse an event.
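For anyone wondering what the second change looks like in practice, it amounts to wrapping mutate and date in a conditional on the grok failure tag, roughly like this (the grok pattern and field names are placeholders, not my actual parser):

    filter {
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
      # Only run mutate/date when grok actually parsed the event;
      # events tagged _grokparsefailure pass through untouched.
      if "_grokparsefailure" not in [tags] {
        mutate {
          lowercase    => [ "verb" ]
          remove_field => [ "message" ]
          add_field    => { "parsed_ok" => "true" }
        }
        date {
          match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
        }
      }
    }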
