I deployed a multi-node cluster a few days ago and it was working fine. Today I saw that the Elasticsearch service had gone down on one node due to insufficient memory. It is running fine on the other two Elasticsearch nodes.
The system has 16G of memory; around 1.5G of free memory was available.
We have configured Elasticsearch to reserve a 5G heap in /etc/elasticsearch/jvm.options, but since that much memory is not available, it fails to start:
    Dec 07 21:08:29 itfoobpnoneuapp3uat elasticsearch: # There is insufficient memory for the Java Runtime Environment to continue.
    Dec 07 21:08:29 itfoobpnoneuapp3uat elasticsearch: # Native memory allocation (mmap) failed to map 5195956224 bytes for committing reserved memory.
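For reference, the heap settings in /etc/elasticsearch/jvm.options are just the standard min/max pair (quoting from memory, surrounding options omitted):

    -Xms5g
    -Xmx5g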
I see the same JVM setting for Logstash in /etc/logstash/jvm.options, reserving a 5G heap.
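That file has the equivalent pair (again quoted from memory):

    -Xms5g
    -Xmx5g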
Q 1. Does that mean:

- the Elasticsearch service will use/reserve its own 5G of memory separately, which will not be available to anything else, including the OS?
- the Logstash service will also use/reserve its own 5G of memory, which will likewise be unavailable to anything else?

If both are true, does that mean that out of 16G of memory, 10G will be reserved by these two components and unavailable to everything else?
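In other words, the split I am worried about would look like this (my own arithmetic, not something I have measured):

    Elasticsearch heap:                 5G
    Logstash heap:                      5G
    Left for OS, file cache, etc.:      6G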
The Elastic Stack will mainly be used for checking logs in Kibana and some uptime monitoring. Based on this, how much memory should we reserve for Logstash? (I know it changes case by case.)
Q 2. As a best practice we should allocate 50% of memory to Elasticsearch (here 8G), but how much should we allocate to Logstash if we are using ELK for the above scenario?
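To make Q 2 concrete, this is the kind of breakdown I am considering; the Logstash figure is only a guess, based on the 1G heap its default jvm.options ships with:

    Elasticsearch heap:             8G  (50% of RAM)
    Logstash heap:                  1G  (guess)
    OS, file cache, everything else: 7G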