I deployed a multi-node cluster a few days back and it was working fine. Today the Elasticsearch service went down on one node because not enough memory was available. It is still running fine on the other two Elasticsearch nodes.
The system has 16G of memory, of which only around 1.5G was free.
We have reserved 5G of memory for Elasticsearch in /etc/elasticsearch/jvm.options, but since that much is not available, it fails to start.
-Xms5288m
-Xmx5288m
Dec 07 21:08:29 itfoobpnoneuapp3uat elasticsearch[59415]: # There is insufficient memory for the Java Runtime Environment to continue.
Dec 07 21:08:29 itfoobpnoneuapp3uat elasticsearch[59415]: # Native memory allocation (mmap) failed to map 5195956224 bytes for committing reserved memory.
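The mmap failure in the log means the kernel could not commit the configured heap at startup. As a rough pre-flight check (just a sketch: the 5288m figure is taken from the jvm.options above, and `MemAvailable` is the kernel's estimate of memory allocatable without swapping), you could compare the two on a Linux host before starting the service:

```shell
#!/bin/sh
# Heap size configured in jvm.options above (-Xms5288m).
heap_mb=5288
# Kernel's estimate of memory available for new allocations, in MB.
avail_mb=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
echo "heap=${heap_mb}M available=${avail_mb}M"
if [ "$avail_mb" -lt "$heap_mb" ]; then
    echo "Not enough free memory: committing the heap (mmap) will fail at startup"
fi
```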
I see the same JVM setting for Logstash in (/etc/logstash/jvm.options), reserving 5G of heap memory.
Q 1. Does that mean the Elasticsearch service will use/reserve its own 5G of memory separately (which will not be available to anyone else, including the OS),
and the Logstash service will also use/reserve its own 5G of memory, which likewise will not be available to anyone else?
If this is true, that means out of 16G of memory, 10G will be reserved by these two components and unavailable to everything else?
The Elastic Stack will mainly be used for checking logs in Kibana and some uptime monitoring. Based on this, how much memory should we reserve for Logstash? (I know it changes case by case.)
Q 2. As a best practice we should allocate 50% of memory to Elasticsearch (here 8G), but how much should we allocate to Logstash if we are using ELK for the above scenario?
Elasticsearch requires both heap and off-heap memory. It is recommended to assign 50% of the memory available to Elasticsearch to the heap. If Elasticsearch is running alone on a host, this is half of the total amount of RAM available. If other services are also running on the host, the RAM available to Elasticsearch is reduced and the heap size will need to shrink as well.
Elasticsearch requires memory in addition to what is allocated for heap, so if you set the heap to 5GB you should assume Elasticsearch will require an additional 5GB in order to operate efficiently. It stores data off heap and also requires a certain amount of operating system page cache to be available. Have a look at this blog post for more information. There is also a useful section in the docs.
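To make the arithmetic concrete, here is a rough budget for the 16G host in this thread (a sketch: the off-heap figure is the "roughly as much again" rule of thumb above, not a measurement):

```shell
#!/bin/sh
# Rough memory budget; numbers come from this thread.
total_gb=16
es_heap_gb=5      # -Xms5288m, rounded down
es_offheap_gb=5   # rule of thumb: assume roughly as much again off-heap
ls_heap_gb=5      # current Logstash jvm.options setting
committed=$((es_heap_gb + es_offheap_gb + ls_heap_gb))
left=$((total_gb - committed))
echo "committed=${committed}G left_for_OS_and_page_cache=${left}G"
# prints: committed=15G left_for_OS_and_page_cache=1G
```

With only 1G left for the OS and page cache, the mmap failure in the log is unsurprising.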
Thanks for your reply. Is this true for Logstash as well? We have set 5GB for Logstash too.
Will it also take 5G as heap memory plus an additional 5G to operate efficiently?
5GB for Logstash sounds like a lot. I would probably start at 2GB and only increase if necessary. I do not think the same applies to Logstash, as it does not hold a lot of data.
I'd like to give a hint on this. The answer is yes.
Generally, any Java process launched has its own heap configuration, and this heap is used for object lifecycle management. So in your case, since you are running multiple components, take care not to over-commit memory to the Java processes.
Elasticsearch shouldn't be failing due to OOM errors, since its circuit breaker is limited to 70% of the heap by default. More info here.
Elasticsearch can also fail if it is not able to allocate off-heap memory in addition to the heap, and for this I do not think circuit breakers will help.
For production, I suggest you separate services like Kafka, ZooKeeper, and Tomcat onto another node.
Java also needs computing power; Kafka needs memory to handle messages, and Tomcat likewise needs CPU and memory depending on the application deployed in the WAR.
Check system performance using htop or top to get insight into CPU and memory usage.
If a lot of swap is being used, move some of the services to other nodes.
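A few standard Linux commands for the checks suggested above (a sketch; exact flags vary by distribution):

```shell
free -m                  # total/used/available RAM and swap, in MB
top -b -n 1 | head -n 5  # one-shot snapshot of load and memory usage
vmstat 1 5               # si/so columns show swap-in/swap-out activity
```

Sustained nonzero si/so values in vmstat are the clearest sign the node is actively swapping.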