Hello,
My Logstash instance is consuming too much JVM heap; because of this, Logstash shuts down and stops receiving data.
I have 4 pipelines running in my Logstash.
Can someone suggest what steps I should follow to tune it?
Thanks in advance.
Can someone please advise me?
Regards
It's pretty obvious you have a memory leak: ever-higher heap utilization after each GC, then long GC pauses once the heap fills, until an OOM causes a restart. I suggest you read this.
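The pattern described above (old-generation usage climbing even after full GCs) can be watched live with `jstat`, which ships with the JDK. A minimal sketch, assuming Logstash runs as a single Java process on the same host:

```shell
# Find the Logstash JVM process id (assumes one Logstash Java process)
LS_PID=$(pgrep -f logstash | head -n1)

# Print heap occupancy percentages and GC counts every 5 seconds.
# A leak shows as the O (old gen) column staying high and climbing
# even after the FGC (full GC) count increases.
jstat -gcutil "$LS_PID" 5000
```

If the O column keeps rising across full GCs, that supports the memory-leak diagnosis rather than a simply undersized heap.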
Hello @Badger,
Thank you for the response.
How do I enable a heap dump? Could you please guide me?
I am using CentOS Linux 7 with:

openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)

When I try to collect a JVM heap dump, I get:

bash: jmap: command not found
Regards
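For reference, `jmap` is part of the JDK development tools, not the JRE, which is why it is missing here. A sketch of installing it on CentOS 7 and taking a dump manually (the package name assumes OpenJDK 8, matching the version above; `<pid>` is the Logstash process id):

```shell
# jmap ships in the JDK "devel" package, not the JRE
sudo yum install -y java-1.8.0-openjdk-devel

# Take a binary heap dump of the running JVM
# (replace <pid> with the Logstash process id, e.g. from pgrep -f logstash)
jmap -dump:live,format=b,file=/tmp/logstash-heap.hprof <pid>
```

The resulting `.hprof` file can then be opened in a heap analyzer such as Eclipse MAT.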
I would try -XX:+HeapDumpOnOutOfMemoryError to see if your JVM supports that.
Hello @Badger,
Thank you for the response
I would be thankful If You guide me to solve this issue
Regards
You should place it on the JVM command line, in jvm.options, for example.
OK, so when your JVM runs out of memory it should generate a heap dump, which you will need to analyze. I suggest you keep the heap small so that the heap dump is manageable: if your JVM runs out of heap at 8 GB, the dump will be a lot easier to work with than if it runs out at 48 GB.
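Putting the advice above together, a sketch of the relevant entries in Logstash's jvm.options (heap size and dump path are example values, not recommendations for every setup):

```
# Keep the heap small so any dump stays manageable (example: 8 GB)
-Xms8g
-Xmx8g

# Write a heap dump automatically when the JVM runs out of memory
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/log/logstash/heapdump.hprof
```

After editing jvm.options, restart Logstash for the flags to take effect.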
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.