I have installed an ES cluster on my k8s cluster, and no matter how much RAM I give the nodes as a container limit, they eventually get OOM-killed. The load on the cluster is low (approx. 50 docs/min).
I used the default template for the Elasticsearch operator (ECK) with 3 nodes, all of them master + data.
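For reference, this is roughly the manifest I applied; a sketch based on the ECK quickstart, with the resource limits and JVM heap settings I added (the version and values here are examples, not exact):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.0           # example version, not necessarily the one I run
  nodeSets:
  - name: default
    count: 3                # 3 nodes, all master + data (ECK default roles)
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms3g -Xmx3g"   # heap pinned to ~half the container limit
          resources:
            requests:
              memory: 6Gi
            limits:
              memory: 6Gi          # one of the limits I tried (also 12Gi, 40Gi)
```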
Monitoring shows that memory usage of the affected container grows steadily, and when it reaches the limit, Kubernetes kills it. I have tried limits of 6Gi, 12Gi, and 40Gi...
The Kubernetes node has 188GB of RAM, and it looks to me like the container ignores the limit and tries to use all of it. Could this be related to Lucene's memory-mapped files, and if so, how do you solve it?