Elasticsearch process is killed by OOM killer

Hi,

We have a 4-node cluster of c5.4xlarge instances (32 GB RAM each) running Elasticsearch. The Elasticsearch process gets killed after roughly three weeks of uptime, and there is not enough memory left to restart it, so we had to manually clear the cache and reduce the heap settings to get the process started again.

localhost kernel: [2444278.080081] elasticsearch[1 invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
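For anyone debugging the same symptom: you can confirm the OOM killer was involved by filtering the kernel log. A minimal sketch, using the log line above as sample input so it runs anywhere; on the affected node you would pipe `dmesg -T` through the same filter instead of the `echo`:

```shell
# Sample kernel log line from this incident; on a live node, replace the
# echo with: dmesg -T | grep -ciE 'oom-killer|killed process'
line='localhost kernel: [2444278.080081] elasticsearch[1 invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0'
echo "$line" | grep -ciE 'oom-killer|killed process'   # counts matching lines -> 1
```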

-Xmx and -Xms are both set to 15g, with GC settings as below.

-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
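For context, all of these flags live together in the JVM options file (path assumed to be the default package-install location; adjust for your setup):

```
# /etc/elasticsearch/jvm.options (assumed default location)
-Xms15g
-Xmx15g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
```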

Other settings:

bootstrap.memory_lock: true
MAX_LOCKED_MEMORY=unlimited
MAX_OPEN_FILES=65536
MAX_MAP_COUNT=262144
MAX_THREADS=8192

Can you advise whether any other settings should be changed to prevent the OOM killer from killing the Elasticsearch process?

Thanks,
A

Which version of Elasticsearch are you using? Older releases in the 7.x series allowed off-heap allocation of up to the heap size; this has been reduced to half the heap size in releases newer than 7.3, IIRC. You may want to reduce the -Xms and -Xmx values from their current value of 15GB.

Hi Magnus,

Thanks.

I'm using Elasticsearch 7.5. Does that mean the rule of allocating ~50% of available RAM to the heap is no longer required for this version, and Elasticsearch will still have enough memory for in-memory processing?

Can I reduce this to 30% of total RAM?

Thanks,
A

Sure. Unless your nodes really need a large amount of heap space (e.g. for large aggregations, or to ingest very big documents) you can reduce the allocated heap space. Make sure you monitor heap usage in Kibana's monitoring app to detect when memory gets tight. This typically shows up as heap usage staying at or above the occupancy fraction for long periods of time.
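To make that concrete, here's a sketch for a 32 GB node (the 30% figure is from your question, not an official recommendation; the `_cat/nodes` endpoint is real, but the response below is simulated so the check runs without a cluster):

```shell
# ~30% of a 32 GB node for the heap; keep -Xms and -Xmx equal.
total_gb=32
heap_gb=$(( total_gb * 30 / 100 ))   # integer division -> 9
echo "-Xms${heap_gb}g -Xmx${heap_gb}g"

# Monitoring sketch: in production you would fetch per-node heap usage with
#   curl -s 'localhost:9200/_cat/nodes?h=name,heap.percent,heap.max'
# Simulated response here (hypothetical node names) so it runs stand-alone:
response='node-1 72 14.9gb
node-2 81 14.9gb'
# Flag nodes at or above the 75% CMS occupancy fraction.
echo "$response" | awk '$2 >= 75 {print $1 " heap at " $2 "%"}'
```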

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.