ES heap usage is always at ~90% on all 11 nodes, each with a 20 GB JVM heap (out of 28 GB RAM) and a 4-core CPU

I have an 11-node ES cluster. Each node has:

  1. 4-core CPU
  2. 28 GB total memory (of which I allocated 20 GB to the ES heap)
  3. One 2 TB OS disk and two attached 2 TB data disks => 6 TB per ES node
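One thing worth checking against the standard sizing guidance: Elastic recommends giving the JVM heap no more than about 50% of physical RAM, because Lucene leans heavily on the OS filesystem cache for the rest. With 20 GB of 28 GB going to the heap, only ~8 GB is left for the page cache. A more conventional split (a suggestion, not a prescription) would be set in `jvm.options`:

```
## Heap at ~50% of the 28 GB physical RAM; Xms and Xmx must match
-Xms14g
-Xmx14g
```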

Total cluster disk space: 6 TB * 11 = 66 TB. Currently I have almost 25 TB of data, with:

1. 224 indices
2. 1059 shards
3. Nested documents, so each record creates multiple entries; the total number of docs inserted is 596,915,451,283 (~596 bn records)

Settings applied on each node:

discovery.zen.ping_timeout: 15s
discovery.zen.publish_timeout: 60s
discovery.zen.fd.ping_interval: 5s
discovery.zen.fd.ping_retries: "3"
discovery.zen.fd.ping_timeout: 15s
discovery.zen.join_timeout: 1m
discovery.zen.minimum_master_nodes: "1"
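One setting here jumps out: with `discovery.zen.minimum_master_nodes: 1` on an 11-node cluster, a network partition can elect two masters at once (split brain). For zen discovery the recommended value is (master-eligible nodes / 2) + 1; assuming all 11 nodes are master-eligible (an assumption, since the node roles are not shown), that would be:

```
discovery.zen.minimum_master_nodes: 6
```

This does not explain the heap usage, but it is worth fixing regardless.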

bootstrap.memory_lock: true
indices.fielddata.cache.size: 5%
indices.queries.cache.size: 2%
indices.breaker.fielddata.limit: 50%

Everything else is default config; the only additions were to let ES lock its 20 GB of memory from the OS:

LimitMEMLOCK=infinity under the [Service] section of the systemd unit

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

ulimit -n 65536

So I think I have applied all the ES optimisation settings I could find in blogs and the official ES documentation.

The problem I am facing: even when no queries are running at all (though writes are happening from the Kafka Elasticsearch connector at roughly 30-second intervals), ES is burning through 90% of the allocated 20 GB heap (17-18 GB).
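A consistently high heap reading is not by itself a sign of trouble: with the CMS collector that Elasticsearch ships by default on pre-7.x versions (which the `discovery.zen.*` settings above suggest this is), the old generation is only collected once occupancy crosses the initiating threshold, so heap usage naturally climbs toward that line and drops sharply after each collection. The relevant defaults in the shipped `jvm.options` are:

```
## GC configuration (Elasticsearch's shipped CMS defaults, shown for reference)
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
```

If the heap sits near 90% and never drops even after full GCs, that is a different story and points at genuinely retained data (segment memory for 1059 shards holding 25 TB is a likely candidate).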

What I would like to know:

1. Am I missing any config that needs to be tuned?
2. If not, is the only option to upgrade the cluster (scaling up the current nodes or adding new ones)?
3. If it is a JVM issue that needs tuning, how do I do that?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.