I have a set of Logstash pipelines that run tasks every day at a fixed time using the http_poller input.
Each task completes (from the input of the first pipeline to the outputs of the last one)
within a few minutes.
For the rest of the day the pipelines are supposedly idle.
The Logstash instance is deployed in a Kubernetes pod.
When I monitor the pod's memory usage, it has been increasing continuously ever since initialization.
Over the span of 2 weeks, memory use has grown almost linearly from 750 MB to 1.8 GB.
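(For reference, I'm reading the pod's memory with something like the command below; the pod and namespace names are placeholders, not the real ones:)

kubectl top pod logstash-0 -n logging   # reports the pod's CPU and memory (working set) via the metrics server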
The input plugins in use are:
- http_poller
- dead_letter_queue
- jdbc
The filters:
- ruby
- elasticsearch
- mutate
- split
- fingerprint
- date
- translate
- prune
- http
The outputs:
- s3
- amazon_es
- jdbc
In my jvm.options file, the heap size is set like this:
-Xms1g
-Xmx1g
On the pod, the LS_JAVA_OPTS environment variable is set to:
-Xms2g
-Xmx2g
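For completeness, this is roughly how that variable appears in the pod spec (the container name here is a placeholder):

containers:
  - name: logstash               # placeholder container name
    env:
      - name: LS_JAVA_OPTS
        value: "-Xms2g -Xmx2g"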
This is my JVM status from the node stats API:
"mem" : {
  "heap_used_percent" : 47,
  "heap_committed_in_bytes" : 2138767360,
  "heap_max_in_bytes" : 2138767360,
  "heap_used_in_bytes" : 1010755840,
  "non_heap_used_in_bytes" : 217426408,
  "non_heap_committed_in_bytes" : 257306624,
  "pools" : {
    "survivor" : {
      "peak_used_in_bytes" : 8716288,
      "committed_in_bytes" : 8716288,
      "peak_max_in_bytes" : 8716288,
      "used_in_bytes" : 1019440,
      "max_in_bytes" : 8716288
    },
    "young" : {
      "peak_used_in_bytes" : 69795840,
      "committed_in_bytes" : 69795840,
      "peak_max_in_bytes" : 69795840,
      "used_in_bytes" : 19164000,
      "max_in_bytes" : 69795840
    },
    "old" : {
      "peak_used_in_bytes" : 1303750856,
      "committed_in_bytes" : 2060255232,
      "peak_max_in_bytes" : 2060255232,
      "used_in_bytes" : 990572400,
      "max_in_bytes" : 2060255232
    }
  }
},
"gc" : {
  "collectors" : {
    "young" : {
      "collection_time_in_millis" : 171762,
      "collection_count" : 6665
    },
    "old" : {
      "collection_time_in_millis" : 17094,
      "collection_count" : 14
    }
  }
},
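(The stats above come from the Logstash node stats API, queried on the default monitoring port 9600, roughly like this:)

curl -s 'http://localhost:9600/_node/stats/jvm?pretty'   # JVM section of the node stats API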
Why is my memory use only ever increasing?
Thanks in advance!