Elasticsearch Memory Consumption Spikes in Kubernetes Deployments

We are deploying the open-source version of Elasticsearch 7.9.3 in an HA configuration on Kubernetes. The cluster uses the default heap size of 1 GB, and there is currently no data in the cluster. When the cluster starts up, memory usage often spikes to around 6 GB according to our monitoring tools. A few minutes later, memory usage settles back to 1.3-1.4 GB, which is what we would expect.
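For context, here is a minimal sketch of how the Elasticsearch container is defined in our StatefulSet. The explicit ES_JAVA_OPTS line simply mirrors the 1 GB default from the bundled jvm.options, and the container name, ports, and resource request are illustrative rather than copied from our manifest (we do not set a memory limit at the moment):

containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.9.3
    env:
      - name: ES_JAVA_OPTS
        value: "-Xms1g -Xmx1g"   # same 1 GB heap as the bundled jvm.options default
    resources:
      requests:
        memory: 2Gi              # request only; no memory limit is set
    ports:
      - name: http
        containerPort: 9200
      - name: transport
        containerPort: 9300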

We do not see anything in the logs that indicates why this is happening. These are the settings we configure in elasticsearch.yml:

"transport.type":               "netty4",
"action.auto_create_index":     false,
"thread_pool.write.queue_size": 500,
"indices.fielddata.cache.size": "15%",

"cluster.routing.allocation.disk.watermark.low":         "500mb",
"cluster.routing.allocation.disk.watermark.high":        "500mb",
"cluster.routing.allocation.disk.watermark.flood_stage": "500mb"

We have an identical deployment running in Docker containers directly on the host machine, and the issue does not occur there.
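In case it helps, these are the commands we use to compare memory in the two environments; the container name and the pod label selector are placeholders for whatever the deployment actually uses:

# Docker-on-host deployment: container memory as the cgroup sees it
docker stats --no-stream elasticsearch
# Kubernetes deployment: pod working set (requires metrics-server)
kubectl top pod -l app=elasticsearch
# What Elasticsearch itself reports, to separate JVM heap from total process memory
curl -s 'localhost:9200/_nodes/stats/jvm,process?filter_path=nodes.*.jvm.mem,nodes.*.process.mem&pretty'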

Has anyone experienced this before, or does anyone have an idea of what could be happening? Let me know if there are any other logs or configuration details I can provide.
