Hello there, we have 3 data nodes running ELK version 7.6.1, and before we add the new nodes to our cluster we want to increase the number of shards allowed on each node. We did it with the following directives:
The setting to use is cluster.max_shards_per_node. Be aware that the limit is there for a good reason, and I would consider the default quite high already. If you have a lot of small indices/shards, I would recommend you look into changing your sharding practices. Have a look at this old blog post for further details.
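For what it's worth, the limit can be changed dynamically through the cluster settings API. A minimal sketch, assuming a node reachable on localhost:9200 (the value 2000 is purely illustrative, not a recommendation):

```
# Raise the per-node shard limit cluster-wide; "persistent" survives restarts.
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.max_shards_per_node": 2000
  }
}'
```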
Note that shard handling has been improved in recent versions, so I would recommend you upgrade to the very latest version of Elasticsearch.
Thank you for the article. I understood that the number of shards depends on heap size, but I still have a question: how does the cluster.max_shards_per_node setting correlate with heap size? I mean, if I want to increase the maximum shards on each node, can I just increase the heap size (for example by 10 GB) and leave this setting alone?
If you follow the guidelines in the blog post, 1000 shards per node should be sufficient, so you should not need to increase it. You can increase the setting without adding heap. It is a guideline rather than a hard rule, aimed at preventing oversharding, which can be very inefficient and can eventually cause problems that are hard to recover from down the line.
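If you want to check how close you are to that guideline, a quick way (again assuming a node reachable on localhost:9200) is the cat allocation API, which lists the shard count per node:

```
# Show the number of shards currently allocated to each node.
curl -s "localhost:9200/_cat/allocation?v"
```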