I have an 11-node ES cluster. Each node has:
- 4-core CPU
- 28 GB total memory (20 GB of it allocated to the ES heap on each node)
- 1 OS disk of 2 TB and 2 attached data disks of 2 TB each => 6 TB per ES node

Total cluster disk space: 6 * 11 = 66 TB. Currently I have almost 25 TB of data with:
1. 224 indices
2. 1059 shards
3. Nested documents, so each record creates multiple entries; the total number of docs inserted so far is 596,915,451,283 (596 bn).
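For context on the 596 bn figure: with nested mappings, index-level stats count every hidden nested Lucene document, while the `_count` API counts only top-level documents. The two can be compared like this (`localhost:9200` is an assumption; adjust for your cluster):

```shell
# Top-level document count only (nested docs excluded)
curl -s 'localhost:9200/_count?pretty'

# Lucene-level doc counts per index (each nested object counts as a doc)
curl -s 'localhost:9200/_cat/indices?v&h=index,docs.count'
```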
Settings applied on each node:
**/etc/elasticsearch/elasticsearch.yml**

```
discovery.zen.ping_timeout: 15s
discovery.zen.publish_timeout: 60s
discovery.zen.fd.ping_interval: 5s
discovery.zen.fd.ping_retries: "3"
discovery.zen.fd.ping_timeout: 15s
discovery.zen.join_timeout: 1m
discovery.zen.minimum_master_nodes: "1"
bootstrap.memory_lock: true
indices.fielddata.cache.size: 5%
indices.queries.cache.size: 2%
indices.breaker.fielddata.limit: 50%
```

**/etc/elasticsearch/jvm.options** (all default config; I only changed the heap size to demand 20 GB from the OS for ES)

```
-Xms20g
-Xmx20g
```

**/etc/default/elasticsearch**

```
MAX_LOCKED_MEMORY=unlimited
MAX_OPEN_FILES=65536
```

**/usr/lib/systemd/system/elasticsearch.service** (under the `[Service]` section)

```
LimitMEMLOCK=infinity
```

**/etc/security/limits.conf**

```
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
```

```
ulimit -n 65536
```
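To confirm the memory-lock and file-descriptor settings actually took effect on the running process, something like the following can be run on each node (host/port and the `pgrep` pattern are assumptions for a default install):

```shell
# Ask every node whether mlockall succeeded at startup
curl -s 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'

# Inspect the limits the live ES process actually received
cat /proc/$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)/limits \
  | grep -E 'locked memory|open files'
```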
So I think I have applied all of the ES optimisation settings I could find in blogs and in the official ES documentation.
The problem I am facing: even when no queries are running at all, but with write operations coming in from the Kafka Elasticsearch connector roughly every 30 seconds, ES is burning 90% of the heap (17-18 GB out of the 20 GB allocated).
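To see what is actually holding that heap (segment memory, fielddata, and caches are the usual suspects on a write-heavy cluster), I can pull a per-node breakdown like this (`localhost:9200` is an assumption):

```shell
# Per-node heap usage and the main in-heap consumers
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,segments.memory,fielddata.memory_size,query_cache.memory_size,request_cache.memory_size'

# Detailed JVM memory-pool stats per node
curl -s 'localhost:9200/_nodes/stats/jvm?filter_path=nodes.*.jvm.mem&pretty'
```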
Tell me:
1. Am I missing any config that needs to be tuned?
2. If I am not missing anything, is the only option to upgrade the ES cluster (scaling up the current nodes or adding new ones)?
3. If it is a JVM thing that needs tuning, how can I do that?