I am running CentOS, and I have the ELK stack (Elasticsearch, Logstash, Kibana) plus Graphite and Grafana on this VM.
When I run top I can see that Elasticsearch is the culprit:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
899 elasticsearch 20 0 6110m 1.2g 35m S 100.0 31.3 1262:28 java
I don't know why it started acting like this. I was told the hot threads API would help, but I am new to Elasticsearch and need help understanding it.
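If I understand correctly, the hot threads API is queried with something like the following (this is just my guess, assuming Elasticsearch is listening on the default localhost:9200):

# Show the busiest threads on each node (threads=3 is the default, shown here explicitly)
curl -s 'http://localhost:9200/_nodes/hot_threads?threads=3'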
Will it help if I reduce the number of shards by archiving old indices? I can restore them to a local Elasticsearch instance whenever I need to see historical data.
What is a healthy number of shards for a single node?
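To see what I am actually dealing with, I assume I can count indices and shards with the _cat APIs, roughly like this (again assuming localhost:9200):

# List all indices with doc counts and on-disk sizes
curl -s 'http://localhost:9200/_cat/indices?v'

# Count every shard in the cluster (one line per shard)
curl -s 'http://localhost:9200/_cat/shards' | wc -l

# Cluster health also reports active_shards
curl -s 'http://localhost:9200/_cluster/health?pretty'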
We may also increase the number of nodes, I guess, if that is what is causing the CPU usage.
Yes, we are using Logstash and keeping history, hence the large numbers, I think.
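Regarding the archiving idea above, I assume the way to do it is to snapshot the old daily Logstash indices and then delete (or just close) them. Something like the sketch below, where the repository path /mnt/es_backups and the logstash-2015.01.* index pattern are only examples for my setup:

# Register a filesystem snapshot repository
# (newer versions require the location to be whitelisted via path.repo in elasticsearch.yml)
curl -XPUT -H 'Content-Type: application/json' 'http://localhost:9200/_snapshot/archive' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/es_backups" }
}'

# Snapshot the old daily indices, then delete them to drop their shards from the node
curl -XPUT -H 'Content-Type: application/json' 'http://localhost:9200/_snapshot/archive/logstash-2015.01?wait_for_completion=true' -d '{
  "indices": "logstash-2015.01.*"
}'
curl -XDELETE 'http://localhost:9200/logstash-2015.01.*'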
Where can I make these changes? Is it in elasticsearch.yml, or is there a Logstash config file that I need to change?
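From what I have read, one way to change the shard count for newly created Logstash indices is an index template on the Elasticsearch side rather than a Logstash setting. On the older 1.x/2.x template API I assume it would look roughly like this (the template name and values are just my guesses):

# Make new logstash-* indices single-shard with no replicas (reasonable for a one-node cluster)
curl -XPUT -H 'Content-Type: application/json' 'http://localhost:9200/_template/logstash_shards' -d '{
  "template": "logstash-*",
  "order": 1,
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'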
The person who knew Elasticsearch is not with us anymore, so I am relying on your help and Google.
I grepped for the Elasticsearch process and its command line looks like "/usr/bin/java -Xms256m -Xmx1g -Xss256k", which limits its heap usage to 1 GB. I still don't know what the 256m and 256k do, though.
Correct me if I am wrong, but increasing the limit should fix my problem, right?
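For the flags themselves: as far as I can tell, -Xms256m is the initial heap, -Xmx1g is the maximum heap, and -Xss256k is the per-thread stack size. If raising the heap is the fix, I assume it is done roughly as below, depending on the Elasticsearch version and how it was installed (paths are the RPM defaults, and 2g is just an example of roughly half the RAM on this ~4 GB VM):

# Older (1.x/2.x) RPM installs: set the heap in /etc/sysconfig/elasticsearch
# ES_HEAP_SIZE sets both -Xms and -Xmx; keep it at or below ~50% of physical RAM
ES_HEAP_SIZE=2g

# Newer (5.x+) installs: set the same values in /etc/elasticsearch/jvm.options instead
-Xms2g
-Xmx2g

# Then restart the service for the change to take effect
sudo service elasticsearch restart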