Elasticsearch nodes get killed by Kubernetes due to OOM

I have installed an ES cluster on my k8s cluster, and no matter how high I set the memory limit for the nodes, they eventually get killed. The load on the cluster is low (approx. 50 docs/min).

I used the default template for the Elasticsearch operator with 3 nodes (all master + data).
Monitoring shows that memory usage grows steadily for each container, and when it reaches the limit, Kubernetes kills the pod. I have tried limits of 6Gi, 12Gi, 40Gi...
The Kubernetes node has 188GB of RAM, and it seems like the container ignores the limit and wants to use all of it. Does this have to do with Lucene's memory-mapped files, and how do you solve it?
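
For reference, the spec is essentially the operator's default/quickstart template with memory requests and limits added; nothing else (JVM heap, VM settings) is overridden. The name, version, and the 6Gi value below are illustrative, showing one of the combinations I tried:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 7.6.0                 # illustrative version
  nodeSets:
  - name: default
    count: 3                     # 3 nodes, all acting as master + data (no explicit roles set)
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            requests:
              memory: 6Gi
            limits:
              memory: 6Gi        # also tried 12Gi and 40Gi; the container still grows to the limit
```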

Hi @Bozo_Tegeltija,
Can you share your Elasticsearch cluster YAML specification?
Also, can you give more details about your Kubernetes cluster (underlying OS and kernel, vendor, cloud provider, etc.)?

We suspect there are some issues with CentOS 7-based hosts.


It is CentOS with an older kernel version. Thanks a lot for the insight!