I'm running 4 basic pipelines on Logstash with the persisted queue enabled. Everything ran smoothly until the output went down for a while and the queue filled up to its 1GB limit. When the output came back online, Logstash couldn't flush the queue properly: even though it is configured to use 1GB of RAM (the default jvm.options), it uses more than 5GB and ends up being killed by the OOM killer. I tried tweaking the limit (5GB, 300MB, etc.), but whatever value I set, Logstash seems to simply ignore it and keeps consuming RAM until it gets killed.
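For reference, this is the kind of heap setting I've been changing in jvm.options (1g is the shipped default; the other values are just ones I experimented with):

```
## jvm.options — heap size settings I tried
-Xms1g
-Xmx1g
# also tried -Xms5g/-Xmx5g and -Xms300m/-Xmx300m with the same result
```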
I see this issue both with Logstash installed on premise and when running it in Docker.
How can I control RAM usage so Logstash doesn't get killed on a machine with ~5GB of free RAM, while still keeping the persisted queue? Is there anything I need to change in jvm.options, or in the queue page size options?
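My current queue configuration in logstash.yml looks roughly like this (these are the standard Logstash queue settings; queue.page_capacity is still at its default, which I haven't touched yet):

```yaml
# logstash.yml — persisted queue settings
queue.type: persisted
queue.max_bytes: 1gb          # the queue fills up to this limit while the output is down
# queue.page_capacity: 64mb   # default page file size; not sure if lowering this would help
```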
Thanks in advance for your help!