There also seems to be a kind of rule-of-thumb formula (keep the number of shards on each node below 20 per GB of heap), which is definitely not exact but could help to get an idea.
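As a worked example of that rule of thumb (a minimal sketch; the 30 GB heap is just an illustrative value, not from the thread):

```python
# Rule of thumb from Elastic's sizing guidance: keep the shard count
# on a node below roughly 20 per GB of configured JVM heap.
def max_recommended_shards(heap_gb: float, shards_per_gb: int = 20) -> int:
    return int(heap_gb * shards_per_gb)

# Example: a data node with a 30 GB heap should hold at most ~600 shards.
print(max_recommended_shards(30))  # 600
```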
I am now facing the problem that the ES cluster dynamically gets new indices, which are mostly used only from time to time. This means the number of "active" shards keeps growing. I know I could grow the cluster with new nodes.
This is a fairly general recommendation coming from the fact that a lot of users tend to get into trouble by over-sharding, which can cause a lot of problems. Some of it is related to heap usage, but it is also related to the size of the cluster state (keeping track of shards and mappings) and the time it takes to apply and propagate changes.
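For reference, here is one way to see how many shards a cluster is actually tracking, to compare against the heap-based budget above. This is a sketch using Python's `requests` library against the standard `_cluster/stats` and `_cat/shards` endpoints; the `localhost` address is an assumption:

```python
from collections import Counter

import requests

ES = "http://localhost:9200"  # assumed cluster address

# Total shard count the master must track in the cluster state
stats = requests.get(f"{ES}/_cluster/stats").json()
print("total shards:", stats["indices"]["shards"]["total"])

# Shards per node, to compare against the heap-based rule of thumb
shards = requests.get(f"{ES}/_cat/shards", params={"format": "json"}).json()
per_node = Counter(s["node"] for s in shards if s["node"])
print(per_node.most_common())
```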
There have been changes in the latest versions which may make heap usage less of a concern, though.
This does not help as much as it used to in older versions, as the cluster still keeps track of closed shards and heap usage has been improved. If indices are only sporadically queried and not changed once created, it may be a better option to forcemerge them down to a single segment and then freeze them. That way they remain available as read-only, use very little in the way of resources, and you do not have to reopen them to search.
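A minimal sketch of that workflow, using Python's `requests` library against the 7.x REST APIs (`_forcemerge`, `_freeze`, and the `ignore_throttled` search parameter are the actual 7.x endpoints and parameters; the cluster address and index name are made-up examples):

```python
import requests

ES = "http://localhost:9200"   # assumed cluster address
index = "logs-2020.07"         # hypothetical, rarely-queried index

# 1) Merge the index down to a single segment (I/O heavy; best done off-peak)
requests.post(f"{ES}/{index}/_forcemerge",
              params={"max_num_segments": 1}).raise_for_status()

# 2) Freeze it: the index stays read-only and searchable with minimal heap use
requests.post(f"{ES}/{index}/_freeze").raise_for_status()

# 3) Frozen indices are skipped by searches by default in 7.x; opt in with
#    ignore_throttled=false when you actually need to query them
r = requests.get(f"{ES}/{index}/_search",
                 params={"ignore_throttled": "false"},
                 json={"query": {"match_all": {}}})
print(r.json()["hits"]["total"])
```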
Oh wow, thanks for that detailed answer. It helps me a lot in understanding, but it leads me to additional questions.
Sounds awesome. Which version exactly? 7.8 or the upcoming 7.9?
Ah, good to know. I already saw the freeze/unfreeze API. It seems to be an X-Pack feature, but I do not get which license is needed to use it in the end. This page, Abonnements | Elastic Stack-Produkte und -Support | Elastic, does not mention the freeze API. Any idea?
I mean, freeze sounds even better than just closing. But is the resource consumption similar? Or does freezing save even more resources?