Hi All,
I just compared the JVM heap usage from Kibana and VisualVM and realized that Kibana shows incorrect values.
LS: v.5.2.2
LS_HEAP_SIZE=512m
Charts: (screenshots of the Kibana monitoring and VisualVM heap charts, not reproduced here)
What's wrong with the monitoring data from LS?
Thanks!
Have you checked the ps tree to see what the actual Xms and Xmx are set to? LS_HEAP_SIZE isn't used anymore. Heap is set in the jvm.options file now.
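For reference, a minimal sketch of what those heap lines look like in config/jvm.options (the 512m values mirror the LS_HEAP_SIZE used in this thread; adjust to taste):

    # config/jvm.options -- heap flags read by the Logstash startup scripts
    -Xms512m
    -Xmx512m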
Yes, I checked the process list with Process Explorer (it's on Windows).
The heap-size variable works for some reason:
"c:\progra~1\java\jdk\jre\bin\java.exe" "-Djdk.home=c:\progra~1\java\jdk" "-Djruby.home=C:\programms\logstash\vendor\jruby" "-Djruby.script=jruby" "-Djruby.shell=cmd.exe" "-Djffi.boot.library.path=C:\programms\logstash\vendor\jruby\lib\jni;C:\programms\logstash\vendor\jruby\lib\jni\i386-Windows;C:\programms\logstash\vendor\jruby\lib\jni\x86_64-Windows" "-Xss2048k" "-Dsun.java.command=org.jruby.Main" "-Djava.class.path=" "-Xbootclasspath/a:C:\programms\logstash\vendor\jruby\lib\jruby.jar" "-Xmx512m" "-XX:+UseParNewGC" "-XX:+UseConcMarkSweepGC" "-XX:+CMSParallelRemarkEnabled" "-XX:SurvivorRatio=8" "-XX:MaxTenuringThreshold=1" "-XX:CMSInitiatingOccupancyFraction=75" "-XX:+UseCMSInitiatingOccupancyOnly" "-XX:+HeapDumpOnOutOfMemoryError" "-XX:HeapDumpPath="C:\programms\logstash/heapdump.hprof"" org/jruby/Main "C:\programms\logstash\lib\bootstrap\environment.rb" "logstash\runner.rb" "-f" "C:\projects\logstash.conf" "--path.settings" "C:/etc/logstash"
Ah, Windows. The jvm.options file doesn't work in Windows. If LS_HEAP_SIZE works for now, great. It might go away in the future. In that case, you should instead edit setup.bat to use the JVM settings you want.
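For example, something like this from a command prompt should keep working for now (a sketch only; the pipeline and settings paths are taken from the command line you pasted, and the exact variable handling may change between versions):

    :: set the heap before launching Logstash, then start it with your pipeline and settings path
    set LS_HEAP_SIZE=512m
    bin\logstash.bat -f C:\projects\logstash.conf --path.settings C:/etc/logstash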
To address your concern: the blue line (marked "UsedHeap") is tracking your heap usage, while the pink line (marked "MaxHeap") is not quite 2x your configured heap size. Under some conditions the JVM can exceed Xmx, and this arrangement helps show what's going on in such a case. As you will note, though, the blue line stays comfortably below the 512m mark, and garbage collection runs very regularly, perhaps a bit too frequently, in fact. I would suggest doubling your heap to 1g to see if that helps; with fewer garbage-collection events from Logstash, your event-processing sawtooth pattern will even out.
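If you want to confirm what Xmx the running JVM actually resolved to, independent of the chart, the JDK tools can query the live process (a sketch; it assumes a full JDK on the PATH and that <pid> is the process id of the Logstash java.exe):

    :: print the resolved max heap for the running Logstash JVM
    jinfo -flag MaxHeapSize <pid>
    :: or dump all resolved VM flags
    jcmd <pid> VM.flags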
But if you compare the heap usage: VisualVM shows that the heap size stays below 220m, while the LS statistics show almost double that (around 400m).
To be honest, I'd trust the VisualVM data.
To say nothing of the empty CPU usage chart...
We'll investigate. Truthfully, it may have something to do with metric collection on Windows and the libraries being used. On my own (Linux) system, I see the same behavior with the JVM, and GC is triggered exactly at the Xmx threshold.
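In the meantime, one way to compare like with like is to pull the JVM numbers Logstash itself reports from its monitoring API and hold them against VisualVM (a sketch; it assumes the default API binding on localhost:9600 and that curl is available, a browser works just as well):

    :: JVM heap stats as reported by Logstash's own node stats API
    curl http://localhost:9600/_node/stats/jvm?pretty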
Ah, OK, it's good to know that it works as expected on Unix.
Thank you!
Are you using a container, or VM perhaps? Or is this running in a single Windows instance?
No VM, it's a "normal" host system: Windows 10.
I just spoke with a team member who has been addressing this. It's a known issue with Windows, and he said he'd get a fix out soon.
Also, apparently math is fun: https://github.com/elastic/logstash/issues/6608
OK, I'll start my research with GitHub next time.
Waiting for v5.4.
Thank you again for your time!