Logstash high memory consumption and possibly a leak

We run Logstash 6.3.1 with a single pipeline: the Beats input plugin, JSON and grok filters, and the S3 output plugin. It runs in Docker with both Xms and Xmx set to 2GB. The node itself has 4GB of memory in total. At startup everything looks good, but over time the process takes more and more memory, up to whatever is available on the node. Right now its Java process has taken 3.8GB.

Using the Logstash node stats API (logstash:9600/_node/stats/jvm) shows heap usage of about 50-60%, but we don't get any info on the non-heap parts, so it doesn't seem to be an issue with the Java heap at this point.
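To make the heap numbers from that endpoint easier to eyeball, you can compute the used/max ratio yourself from the JSON. A minimal sketch; the response body below is a hypothetical, abridged example with made-up numbers, not real output from our node:

```python
import json

# Hypothetical, abridged /_node/stats/jvm response; numbers are made up
# for illustration only.
sample = """
{
  "jvm": {
    "mem": {
      "heap_used_in_bytes": 1181116006,
      "heap_max_in_bytes": 2147483648,
      "heap_used_percent": 55
    }
  }
}
"""

mem = json.loads(sample)["jvm"]["mem"]
pct = 100 * mem["heap_used_in_bytes"] / mem["heap_max_in_bytes"]
print(f"heap: {pct:.0f}% of {mem['heap_max_in_bytes'] / 2**30:.1f} GiB max")
```

With a 2GB heap at ~55% usage, the heap accounts for roughly 1.1GB, well short of the 3.8GB the process occupies, which is why the remainder has to be somewhere off-heap.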

How can we get more info on non-heap memory (Java metaspace) when Logstash runs in a container?
Could we limit JVM metaspace by adding -XX:MaxMetaspaceSize to the jvm.options file?
What else can we look at or tune?
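On the metaspace question: -XX:MaxMetaspaceSize is a standard HotSpot flag, so adding it to jvm.options should work, but note it caps only metaspace, not other off-heap areas (thread stacks, direct buffers, etc.). Enabling Native Memory Tracking as well gives you a per-category breakdown of native memory, at a small runtime cost. A sketch of possible jvm.options additions (the 256m value is just an example, not a recommendation):

```
# Cap metaspace (class metadata); the JVM throws
# OutOfMemoryError: Metaspace if this limit is exceeded.
-XX:MaxMetaspaceSize=256m

# Enable Native Memory Tracking so per-category native memory
# usage can be queried with jcmd.
-XX:NativeMemoryTracking=summary
```

With NMT enabled you can run `jcmd <pid> VM.native_memory summary` inside the container to see where non-heap memory is going. jcmd ships with the JDK, so it may be missing from JRE-only images.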


The Java heap cannot grow beyond whatever you define as Xmx. What you are likely seeing is off-heap memory, which is controlled by the OS.
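One way to see this from inside the container is to compare the OS-level resident set size (RSS) of the process with the heap figures from the stats API. A minimal sketch reading VmRSS from /proc (Linux only); for Logstash you would pass the Java PID, which is often 1 in a container:

```python
def rss_mb(pid="self"):
    """Return the resident set size (VmRSS) of a process in MB, read
    from /proc/<pid>/status. Linux only."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024  # value is in kB
    return None

# Demonstrated on the current process; substitute the Logstash Java PID.
print(f"RSS: {rss_mb():.0f} MB")
```

If RSS is far above heap used plus heap max headroom, the difference is off-heap: metaspace, thread stacks, direct buffers, and the JVM's own bookkeeping.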

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.