Logstash memory use keeps increasing linearly over time

I have Logstash pipelines that run tasks every day at a fixed time using the http input.

Each task finishes executing (from the input of the first pipeline to the outputs of the last one)
within a few minutes.

For the rest of the day it is supposedly idle.

The Logstash instance is deployed in a Kubernetes pod.
When I monitor the pod's memory usage, it has been increasing continuously ever since initialization.

Over the span of two weeks the memory use has grown linearly -- almost in a straight line -- from 750 MB to 1.8 GB.
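A pod-level figure like this can be read from the Kubernetes metrics API, for example (a sketch; "logstash-0" is a placeholder pod name and this assumes metrics-server is installed):

kubectl top pod logstash-0    # placeholder pod name; reports the pod's CPU and memory use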

The input plugins in use are:

  • http_poller
  • dead_letter_queue
  • jdbc

The filters:

  • ruby
  • elasticsearch
  • mutate
  • split
  • fingerprint
  • date
  • translate
  • prune
  • http

The outputs:

  • s3
  • amazon_es
  • jdbc

In my jvm.options file the heap size looks like this:

-Xms1g
-Xmx1g

On my pod the LS_JAVA_OPTS environment variable is set to:

-Xms2g
-Xmx2g 
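For completeness, the variable is set on the Logstash container roughly like this (a sketch; only the relevant part of the pod spec is shown):

env:
  - name: LS_JAVA_OPTS          # extra JVM options for Logstash
    value: "-Xms2g -Xmx2g"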

This is my JVM status from the node stats API.
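The stats can be pulled with something like this (assuming the default monitoring API port, 9600):

curl -s 'localhost:9600/_node/stats/jvm?pretty'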

"mem" : {
      "heap_used_percent" : 47,
      "heap_committed_in_bytes" : 2138767360,
      "heap_max_in_bytes" : 2138767360,
      "heap_used_in_bytes" : 1010755840,
      "non_heap_used_in_bytes" : 217426408,
      "non_heap_committed_in_bytes" : 257306624,
      "pools" : {
        "survivor" : {
          "peak_used_in_bytes" : 8716288,
          "committed_in_bytes" : 8716288,
          "peak_max_in_bytes" : 8716288,
          "used_in_bytes" : 1019440,
          "max_in_bytes" : 8716288
        },
        "young" : {
          "peak_used_in_bytes" : 69795840,
          "committed_in_bytes" : 69795840,
          "peak_max_in_bytes" : 69795840,
          "used_in_bytes" : 19164000,
          "max_in_bytes" : 69795840
        },
        "old" : {
          "peak_used_in_bytes" : 1303750856,
          "committed_in_bytes" : 2060255232,
          "peak_max_in_bytes" : 2060255232,
          "used_in_bytes" : 990572400,
          "max_in_bytes" : 2060255232
        }
      }
    },
    "gc" : {
      "collectors" : {
        "young" : {
          "collection_time_in_millis" : 171762,
          "collection_count" : 6665
        },
        "old" : {
          "collection_time_in_millis" : 17094,
          "collection_count" : 14
        }
      }
    },

Why is my memory use only ever increasing?

Thanks in advance!

This is the way Java works. The JVM keeps allocating heap until it runs out; when it does, it runs a garbage collection, which frees up some of the heap, and then it starts allocating again.
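You can watch that sawtooth in a trivial standalone program (a minimal sketch, nothing Logstash-specific; the allocation sizes are arbitrary):

// Standalone sketch: used heap climbs while short-lived objects are
// allocated, then drops again after the collector runs.
public class HeapWatch {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        java.util.List<byte[]> garbage = new java.util.ArrayList<>();
        for (int i = 0; i < 50; i++) {
            garbage.add(new byte[10 * 1024 * 1024]);   // allocate 10 MB
            if (garbage.size() > 20) garbage.clear();  // let it become collectable
            long usedMb = (rt.totalMemory() - rt.freeMemory()) >> 20;
            System.out.println("used heap: " + usedMb + " MB");
            Thread.sleep(200);
        }
    }
}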

Thanks Badger!
I have just one more question: as far as I understand, we set Xms and Xmx to the same value to avoid the overhead of allocating more memory at runtime.

I set Xmx=2g and Xms=2g. Why doesn't my pod's memory usage correspond to this?

Is my pod memory showing the actual heap usage?

This controls the size of the Java heap. For most applications this dominates the memory usage of the JVM process, but it is possible for the JVM to use far more memory than the heap consumes. I have seen JVMs with heaps of a few GB consume dozens of gigabytes of native memory.
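If you want a breakdown of where that non-heap memory goes, one option is the JVM's Native Memory Tracking (a sketch; it assumes jcmd from the bundled JDK is available inside the pod and that the Logstash JVM runs as PID 1):

# enable tracking in jvm.options (or LS_JAVA_OPTS), then restart Logstash
-XX:NativeMemoryTracking=summary

# then, inside the container, ask the running JVM for a summary
jcmd 1 VM.native_memory summary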

Hi again,

It seems like Logstash is ignoring both the jvm.options and LS_JAVA_OPTS heap size settings.
I set both heap sizes to Xmx=1g, and memory use is again increasing linearly, now reaching 1.1 GB.
It's increasing at a very constant rate -- about 20 MB every 30 minutes.

Calling _node/stats/jvm gives me a heap_used_in_bytes somewhere between 500 MB and 600 MB,
while the pod's actual memory use is 1.1 GB.

Where is the memory leaking? Any help finding the leak is appreciated! Thanks!

You have not given any indication that there is a memory leak. If a JVM's heap is limited to 2 GB then the JVM's overall memory use could be 2.1 GB or 100 GB.

You were right! After reaching 1.15 GB it stopped! Thanks!
