Logstash JVM heap running out of space

Need some advice on how to resolve this issue. I set up a single ELK box that is working great, except that every 4 days or so Logstash runs out of JVM heap, after which it does very little processing and I have to restart it to fix it again.

See the screenshot below of the JVM heap steadily increasing:

[screenshot: JVM heap usage climbing steadily until restart]

Any advice on what I can do to resolve this problem? The ELK instance is set up with 4 GB of RAM. I don't think it is a RAM capacity issue, since the heap is fine initially and only fills up over time.

What version of Logstash? What does your configuration look like? Heap exhaustion is frequently the result of certain plugins being used in a particular configuration.

@theuntergeek

Version ---> 5.3.1
pipeline.workers ---> 6
pipeline.batch_delay ---> 5
pipeline.batch_size ---> 125
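
For context, this is how those settings map onto the config files of a stock Logstash 5.x package install (a sketch only; the file paths are the package defaults and the heap size is an assumption for a 4 GB box, not something confirmed in this thread):

    # /etc/logstash/logstash.yml -- values from above
    pipeline.workers: 6
    pipeline.batch.size: 125
    pipeline.batch.delay: 5

    # /etc/logstash/jvm.options -- assumed heap sizing for a 4 GB box
    # Pinning -Xms and -Xmx to the same value avoids resize pauses, but a
    # heap that climbs steadily and never recovers after GC points at a
    # leak rather than simple undersizing.
    -Xms1g
    -Xmx1g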

It is a single box with 4 GB of RAM and 2 cores, and it is processing the following events/sec on average:

Input ---> 128
Filtered ---> 228
Output ---> 228

(Filtered and Output run higher than Input presumably because the clone filter emits duplicate events.)

The pipeline consists of the following plugins:

input:

  • beats
  • jdbc
  • http_poller
  • file

filter:

  • clone
  • ruby
  • mutate
  • json
  • grok
  • date
  • useragent
  • geoip

output:

  • elasticsearch
  • graphite
  • statsd

This is still rather incomplete without the actual configuration. My suspicions center on the ruby and clone filters, however, as these can abuse memory.
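
For example, a ruby filter that accumulates state across events will grow the heap without bound. A contrived sketch of the pattern (not taken from your config):

    filter {
      ruby {
        # A class-level hash that is written to on every event and never
        # pruned: the heap grows until the JVM runs out of space.
        init => "@@seen = {}"
        code => "@@seen[event.get('id')] = event.to_hash"
      }
    }

And every clone you configure adds another full in-flight copy of each event, multiplying whatever memory the downstream filters already use.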

@theuntergeek

I'll do some testing around the clone and ruby filters to see if I can find a correlation and post my results. Thanks for the info.
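
The plan is to replay a steady synthetic stream through just the suspect filters and watch the heap. A sketch of such a test harness (illustrative values, not my real pipeline):

    input {
      # generator emits synthetic events in a loop, giving a known,
      # steady load to watch the heap under
      generator {
        lines => ['{"id": 1, "message": "test"}']
        count => 0            # 0 = generate events indefinitely
      }
    }
    filter {
      clone { clones => ["cloned"] }
      ruby  { code => "event.set('checked_at', Time.now.to_f)" }
    }
    output {
      stdout { codec => dots }
    }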

@theuntergeek

Made some progress; at the very least I've ruled out cloning itself as the problem. I enabled cloning but disabled some of the processing of the cloned events, and the heap stayed stable over a 4-day test. See the screenshot.

[screenshot: JVM heap usage holding stable over 4 days]

So the problem must be something in the processing that applies only to the clones, via the ruby filters. I will focus there, testing along the lines of the sketch below, and follow up.
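
For anyone following along, the isolation test looked roughly like this (names are illustrative, not my real config):

    filter {
      clone {
        clones => ["cloned"]    # each clone gets its type set to this name
      }
      if [type] == "cloned" {
        # The ruby processing of the clones is disabled for this test;
        # only a harmless tag remains so the branch still does something.
        mutate { add_tag => ["clone-processing-disabled"] }
      }
    }

With the ruby filters inside that conditional switched off, the heap stayed flat, which narrows the leak down to that code.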
