Hello
I am having memory allocation and saturation issues with Logstash. My configuration sets the heap to 20 GB in jvm.options (-Xms20g -Xmx20g), but after the service starts the total memory allocated is not 20 GB; it starts at around 1 GB and steadily climbs to 20 GB over a period of time. Pipeline processing is quick initially, but once memory reaches the 20 GB maximum, log processing slows down dramatically. I have to restart Logstash, after which it picks up the pace again. I have the persistent queue enabled. My questions are:
- When Logstash is installed on Windows, is it supposed to allocate the full 20 GB as soon as the service starts? In Elasticsearch, the full amount specified in jvm.options is allocated when the service starts.
- Why does log processing saturate after reaching the maximum allocation? I'm not sure whether GC is kicking in or not.
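To check whether GC is the culprit, one option is to enable GC logging in Logstash's jvm.options and watch for long or back-to-back collections once the heap fills up. This is a sketch; the exact flags depend on the JVM version (the -Xlog form assumes Java 9+, the commented flags below it are the Java 8 equivalents, and gc.log is a placeholder path):

```
## GC logging (Java 9+ unified logging)
-Xlog:gc*:file=gc.log:time,pid,tags

## Java 8 equivalent:
# -XX:+PrintGCDetails
# -XX:+PrintGCDateStamps
# -Xloggc:gc.log
```

Frequent full GCs with little memory reclaimed as the heap approaches 20 GB would point to GC pressure as the cause of the slowdown.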
My configuration:
- 3 Logstash instances, each with an 8-core CPU and 30 GB RAM
- pipeline.workers: 16
- pipeline.output.workers: 8
- pipeline.batch.size: 1000
- The pipeline has 8 Filebeat inputs, and the outputs write to around 40 time-based indexes depending on the incoming data.
- 6 data nodes in Elasticsearch; I don't see any errors on the ES nodes.
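For reference, the time-based outputs look roughly like the sketch below (the hosts value and the type field used in the index pattern are placeholders, not my exact config):

```
output {
  elasticsearch {
    hosts => ["http://es-node:9200"]   # placeholder host
    # One index per source type per day; ~40 indexes result
    # from the combination of types and dates in the incoming data
    index => "%{[type]}-%{+YYYY.MM.dd}"
  }
}
```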
Let me know if further details are required to answer my questions.