The memory usage of the data nodes in our Elasticsearch cluster is increasing rapidly. We have 2 master nodes and 6 data nodes, with 57 indices and 115 shards, using around 27.5 GB of the total storage allocated to the data nodes (1.2 TB). CPU usage is normal, around 1-2% on all nodes.
Can you suggest some ways to investigate the memory usage, or how we can manage memory consumption on the nodes?
If you are looking to have a highly available cluster, you need 3 master-eligible nodes.
If heap usage is looking OK, it is perfectly normal for Elasticsearch to consume all memory on the host through the operating system page cache. Memory is there to be used, and the page cache is critical for Elasticsearch performance. If some other process needs memory (although you should ideally run Elasticsearch on dedicated hosts), the operating system will reduce the size of the cache and make memory available that way, so it is not a problem.
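A quick way to check this is to compare JVM heap usage against total OS memory per node, using the `_cat/nodes` and `_nodes/stats` APIs. Here is a minimal sketch in Python with the `requests` library, assuming an unsecured cluster reachable at `http://localhost:9200` (adjust the endpoint and add auth as needed):

```python
import requests

ES = "http://localhost:9200"  # assumed endpoint; adjust host/auth for your cluster

# Per-node view: heap.percent is the number to watch for Elasticsearch
# health; ram.percent includes the OS page cache and is expected to sit
# near 100% on a busy node.
resp = requests.get(
    f"{ES}/_cat/nodes",
    params={"v": "true", "h": "name,node.role,heap.percent,heap.current,ram.percent,ram.current"},
)
print(resp.text)

# Finer-grained breakdown per node: JVM heap vs. OS memory.
stats = requests.get(f"{ES}/_nodes/stats/jvm,os").json()
for node in stats["nodes"].values():
    heap_pct = node["jvm"]["mem"]["heap_used_percent"]
    os_pct = node["os"]["mem"]["used_percent"]
    print(f"{node['name']}: JVM heap {heap_pct}%, OS memory {os_pct}%")
```

If `heap.percent` stays at a healthy level while `ram.percent` climbs toward 100%, the "extra" memory is just the page cache doing its job, as described above.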