After some posts and good answers, we optimized our Elasticsearch cluster.
Currently we use:
6 x hot nodes with 32 GB RAM / 16 GB heap each
22 x cold nodes with 16 GB RAM / 8 GB heap each
We reduced the number of primary shards from 8 to 6.
At the moment we see fewer shards and lower CPU usage. We expected RAM usage to drop as well, but instead it rises day by day.
I would recommend you upgrade your cluster to version 8.7, as some significant improvements around heap usage have been made.
Reducing the number of primary shards does not necessarily reduce heap usage a lot. It is, at least on the version you are on, important to also forcemerge down to a single segment.
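For reference, a force merge is a single API call per index and should only be run on indices that are no longer being written to (e.g. those already on the cold tier). A rough sketch in Python, where the host URL and index name are placeholders:

```python
import requests

ES_HOST = "http://localhost:9200"   # placeholder: your cluster endpoint
INDEX = "my-cold-index"             # placeholder: an index that is no longer written to

# Force merge each shard of the index down to a single segment.
# Merging large shards can take a long time, so allow a generous timeout.
resp = requests.post(
    f"{ES_HOST}/{INDEX}/_forcemerge",
    params={"max_num_segments": 1},
    timeout=3600,
)
resp.raise_for_status()
print(resp.json())
```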
As I understand it, with fewer segments we need less memory.
Besides the upgrade we're planning, we haven't tried force merging down to a single segment yet.
But what else uses memory besides the heap?
I can see in our dashboard that many metrics are declining, but with no impact on overall memory usage...
Elasticsearch uses the heap, but also some off-heap memory. In addition, the operating system page cache is essential for performance, so it is common to see all memory in use on an Elasticsearch host.
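If you want to see how that breaks down per node, the cat nodes API reports heap usage and total RAM usage side by side, which makes it easy to spot that the "extra" memory is mostly page cache. A small sketch (host URL is a placeholder):

```python
import requests

ES_HOST = "http://localhost:9200"  # placeholder: your cluster endpoint

# heap.percent shows JVM heap usage; ram.percent shows total OS memory usage,
# which is typically near 100% because the remainder is used as page cache.
resp = requests.get(
    f"{ES_HOST}/_cat/nodes",
    params={"v": "true", "h": "name,heap.percent,ram.percent,ram.max"},
)
resp.raise_for_status()
print(resp.text)
```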