I have a cluster with 3 master nodes and 10 data nodes. Each index is configured with 5 shards and 2 replicas. From what I can see, the data nodes are loaded unevenly: the shards of an index are not spread across all nodes; instead, 2 or 3 shards of the same index end up on the same node, and that node runs at 100% CPU while the other 9 nodes do almost nothing. I believe I missed something in the settings. Should I increase the number of shards per index? Is it possible to configure the cluster to place only one shard of a given index per node, so that the 5 primary shards of an index are guaranteed to land on 5 different data nodes?
I'd be grateful for any clues.
But in your case, with 10 shards and 10 nodes, you would break HA: a shard couldn't recover if a node is lost. (If the setting is 1, anyway.)
Thanks for the URL, very useful.
But I didn't get the part about breaking HA. I have 10 nodes. Each index has 5 shards and 2 replicas, so that's 15 shards per index (5 primary and 10 replica). I see it should be changed, but I'm not sure in which way. What would you recommend?
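To spell out the arithmetic behind those numbers: the replica count applies per primary shard, so 5 primaries with 2 replicas means 15 shard copies per index to allocate.

```python
# Shard-copy arithmetic for one index: replicas are counted per primary shard.
primaries = 5
replicas = 2

replica_copies = primaries * replicas      # 10 replica shards
total_copies = primaries + replica_copies  # 15 shard copies to allocate

print(primaries, replica_copies, total_copies)  # 5 10 15
```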
Oops, I missed the 2 replicas.
As I understand this setting: say you had an index with 4 shards and 1 replica, 8 shards in total. If you set the option to 1, then 2 of your nodes wouldn't contain any shard of this index. If a node holding a shard failed, that shard could be recovered onto one of those 2 nodes.
With 15 shards and 10 nodes, you could set the value to 2; it would prevent 3 shards of the same index landing on one node, but not 2. (Remember that a replica and its primary are always on different nodes, so searches would have another copy to go to.)
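For reference, this per-index limit is the `index.routing.allocation.total_shards_per_node` setting, applied roughly like this (the index name is a placeholder):

```
PUT /my_index/_settings
{
  "index.routing.allocation.total_shards_per_node": 2
}
```

Be careful with it: shards that cannot be placed without violating the limit stay unassigned, which is exactly the HA concern mentioned above.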
I've always wondered, in your case, what 2 replicas accomplish vs. 1. If you had more nodes, it would spread search load, but you only have 10 nodes for search. Do you think your load is search-heavy or indexing-heavy? How big are the shards in GB?
OK, so if I keep 10 nodes and 5 shards per index with 1 replica, that lets me set the option to 1, and then each node will hold 1 shard (primary or replica) of each index, right? If so, would that be the optimal balance of redundancy and search speed? Or am I wrong?
Thanks for helping.
It depends on a number of factors. What is your use case? Will you be using time-based indices or have a fixed number of indices? How many concurrent queries do you need to support? Do you have strict query latency requirements?
I have a fixed number of indices, ~900. I saw at most ~600 search requests during the last month, but I believe my cluster should easily support at least one query to each index at the same time, so around 900 concurrent queries. I have no strict latency requirements, but I would like it as fast as possible. Not sure my answer is clear enough.
What is your average shard size? Given the number of indices, I would assume they are quite small. As you have a large number of indices and want to support a reasonably large number of concurrent queries (one per index in parallel), I would recommend using a single primary shard per index, as long as that keeps the shard size below 30GB-50GB. This means that not every node holds a copy of a shard for every index, but load should even out given the large number of indices you have, so that should not be a problem.
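In case it helps with finding that number: per-shard sizes are listed by `GET _cat/shards?h=index,shard,prirep,store`, and an average can be computed from that output. A minimal sketch, with made-up sample lines standing in for the real response:

```python
# Hypothetical sample of `GET _cat/shards?h=index,shard,prirep,store` output;
# in practice this text comes from the cluster, these lines are made up.
sample = """\
logs-a 0 p 2.4gb
logs-a 0 r 2.4gb
logs-b 0 p 512mb
logs-b 0 r 512mb
"""

UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3, "tb": 1024**4}

def to_bytes(store: str) -> float:
    """Convert a _cat size string like '2.4gb' to bytes."""
    for suffix in ("tb", "gb", "mb", "kb", "b"):
        if store.endswith(suffix):
            return float(store[: -len(suffix)]) * UNITS[suffix]
    raise ValueError(f"unrecognized size: {store}")

sizes = [to_bytes(line.split()[-1]) for line in sample.splitlines()]
avg_gb = sum(sizes) / len(sizes) / 1024**3
print(f"average shard size: {avg_gb:.2f} GB")  # → average shard size: 1.45 GB
```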
To be honest, I wasn't sure how to get the average shard size, but I found that there are around 5k shards, ~300 of them larger than 1GB, with 10GB at most. So you still recommend decreasing the shard count from 5 to 1? I thought that a higher shard count would decrease the CPU load per node.
But I'm still a beginner with ES, so maybe I'm completely wrong. If it's better to go from 5 shards down to 1, would I still need this many nodes?
And a small follow-up question: is reindexing the only way to decrease the number of shards of an existing index?
9000 shards across 10 data nodes is a lot, so I would recommend reducing that. If you serve few concurrent queries, having more shards per index may improve query latency, but if your shards are very small the additional coordination overhead may well mean that querying a single primary shard is not much slower than querying 5. If you expect to support a large number of concurrent queries, however, fewer primary shards per index is likely going to be better in my experience. I would recommend reindexing or shrinking your largest index down to a single shard and seeing how query latencies change. Based on that and the expected load you will be able to determine what is best for your use case.
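A sketch of the shrink route mentioned above (index and node names are placeholders): the index is first made read-only with all its shards collected onto one node, then shrunk into a new one-shard index:

```
PUT /my_big_index/_settings
{
  "index.blocks.write": true,
  "index.routing.allocation.require._name": "data-node-1"
}

POST /my_big_index/_shrink/my_big_index-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}
```

The target shard count must be a factor of the source count (1 always is), and the chosen node needs enough disk to hold a full copy of the index.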
Thanks a lot, very helpful.
But maybe just one more question: if I decrease the number of shards per index, then only 2 nodes (in the case of 1 primary shard and 1 replica) will be busy for any single index. And if the shards of the big indices land on the same 2 nodes, only those 2 nodes will be loaded while the other nodes in the cluster do nothing. With fewer shards, do I need some smart balancing in the cluster to spread the shards across the nodes, or am I thinking in the wrong direction?
It all depends on your query patterns and expected query concurrency. If you only ever serve a single request, fewer shards will distribute load less, but if you have many queries running in parallel against different indices, the nodes may still be quite heavily loaded even with a few shards per index. I think you will need to test to find out.
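While testing, the resulting balance per node can be sanity-checked with the cat APIs (the index name below is a placeholder):

```
GET _cat/allocation?v
GET _cat/shards/my_index?v
```

`_cat/allocation` shows shard counts and disk usage per node, and `_cat/shards` shows where each copy of an index ended up.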