Is there a setting that specifies the maximum number of shards, across all indices, allowed on one node?
My simplified scenario is that I have two machines, one with 8 CPUs and one with 4 CPUs, and one index with 10 shards. The shards get split 5 to one machine and 5 to the other, which makes the load unequal (there is a continuous indexing flow, and the slow machine sits at a constant load of 7 while the fast machine has a load of 3).
The answer to this problem would be to assign 7 shards to the fast machine and 3 to the other, but I don't know how to achieve that.
(I can't modify cluster-wide settings, and I also can't assign shards manually to one machine; this scenario is part of a bigger cluster with different types of machines.)
So my question is: how can I set a maximum number of shards per index on one specific node? (Basically, such a setting would let me cap the slow machine at 3 shards.)
I'm not sure if what you're asking for is doable, but it's certainly not smart. If you only have two servers, both servers must hold the full set of your index data (1 primary on one server mirrored by 1 replica on the other), or else your cluster will be in a permanent state of yellow. And that is never a good thing: if one server drops out of such a cluster, some primary shards will no longer be available, causing indexing to fail and searches to return partial results.
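If you want to see that state for yourself, the cluster health API shows it; a quick check, assuming the cluster answers on localhost:9200:

curl -s 'localhost:9200/_cluster/health?pretty'

The fields to watch in the response are status (green/yellow/red) and unassigned_shards.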
What you could do is set up another weak server to help out in the cluster; the two weak servers could then share the load that the strong one carries alone. I think you could make this setup work by specifying different zone attributes in your elasticsearch.yml files on the strong and the weak servers - for instance, on the strong one:
cluster.routing.allocation.awareness.force.zone.values: strong,weak
cluster.routing.allocation.awareness.attributes: zone
node.attr.zone: strong
and on the two weak ones:
cluster.routing.allocation.awareness.force.zone.values: strong,weak
cluster.routing.allocation.awareness.attributes: zone
node.attr.zone: weak
Elasticsearch will then try to spread the primary shards evenly across the two zones, with 5 in the strong zone (1 server) and 5 in the weak zone (2 servers). As a result, the weak servers will each carry a lighter load than the strong one.
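Once the nodes have been restarted with these settings, you can verify that the zone attribute was picked up and see where the shards actually land; a quick sanity check, again assuming localhost:9200 and an index named my_index (substitute your own index name):

curl -s 'localhost:9200/_cat/nodeattrs?v&h=node,attr,value'
curl -s 'localhost:9200/_cat/shards/my_index?v'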
Then your cluster lacks robustness: if one shard, for some reason or other, becomes corrupt, you have no replica to take over as the new primary, meaning all data in that shard will be lost. That's not a healthy situation if this is critical data.
As for the 3-node, 2-zone setup, I think it will work even if you have no replica shards, spreading the 10 primary shards across the two zones so that 5 get assigned to the strong server and the remaining 5 to the two weaker ones.
If you only have two servers to play with, I can only think of one way to force Elasticsearch to place more than half the primary shards on the stronger server, and that is by tricking the disk-based shard allocation mechanism:
If you set the low watermark to a high value and then reduce the free disk space on the weak server so that it drops below the value set in
cluster.routing.allocation.disk.watermark.low
then no new shards will be assigned to this server - only to the strong server (which I assume has more free disk). If the shards have already been allocated, you will need to force them to move from the weak server to the strong one by also breaking the limit set by
cluster.routing.allocation.disk.watermark.high
Whenever a node in the cluster uses more disk than the high watermark setting allows, Elasticsearch will actively relocate one or more shards from that server to a server with more free space.
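For completeness, both watermarks can be changed on the fly through the cluster settings API. This is only a sketch - the values below are made-up examples (expressed as required free space), and you mentioned you cannot change cluster-wide settings yourself:

curl -s -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "50gb",
    "cluster.routing.allocation.disk.watermark.high": "20gb"
  }
}'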
But I would still suggest the 3-node solution. And using replicas, of course.
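And if the index currently runs with zero replicas, turning them back on later is a single dynamic index setting; a minimal sketch, assuming the index is called my_index:

curl -s -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '
{ "index": { "number_of_replicas": 1 } }'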