Uneven Primary Shard Distribution

Hi, I'm experiencing a problem with uneven primary shard distribution across data nodes. For some reason, all the primary shards end up on the same data node, even though all the data nodes are running the same version and none of them are capped on resources.

Currently I just manually reshuffle the shards to get the load off that node; a sketch of how I do that is below. I'm hoping someone can shed some light on why this keeps happening.
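For reference, the reshuffling is done with the cluster reroute API, roughly like this (the index name, shard number, and node names are placeholders for my actual values):

POST /_cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "my-index",
        "shard": 0,
        "from_node": "data-node-1",
        "to_node": "data-node-2"
      }
    }
  ]
}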

I have attached screenshots of the cluster stats; any help or information would be much appreciated.

_cluster/settings

{
  "persistent": {},
  "transient": {
    "cluster": {
      "routing": {
        "allocation": {
          "enable": "all",
          "exclude": {
            "_ip": "10.0.0.1,10.0.0.2"
          }
        }
      }
    }
  }
}
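In case the screenshots are hard to read, the same per-node shard layout can also be pulled as text, for example:

GET /_cat/shards?v&h=index,shard,prirep,state,docs,store,node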

Can you explain what problem you are seeing with the primary shards being on one node, and why you think you need to spread them out? Why do you think that primary shards have a higher load than replicas? Both need to do the same indexing work, and both are used for searching.

Hi Alexander,

After a careful read of your questions, I think the question I asked was wrong and misleading. Instead, I should have asked why the shard distribution is uneven and why all the primary shards end up on the same node. Sorry about that.

At the times when I have problems with ingestion, the majority of shards seem to be going to the same node, and all the shards on that node are primaries, which is why I naively assumed that primary shards create more load.

A primary and a replica shard have exactly the same indexing load. A document gets indexed first on the primary and, if successful, on all of the replicas, each doing the same amount of work.

Regarding shard balancing: it may happen that all your primary shards end up on the same node if you start/stop nodes, so that replicas are relocated to the other nodes. The shard allocation decision is based on a number of factors, so it is pretty hard to squeeze into a forum post.
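If you want to see the reasoning for a specific shard, the allocation explain API is a good starting point; the index name and shard number below are placeholders:

GET /_cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": true
}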

The problem is in the step you mention about documents being indexed on the primary first.

This becomes a bottleneck for indexing, as all index and update requests are sent to a single node.

The bulk queue fills up, and ultimately we see rejections on my system. When a node goes down, all the primaries get concentrated onto another node, that node then gets bombarded, and it continues until the cluster is in poor health.
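For reference, this is how I check for those rejections (the thread pool is called write on recent versions, bulk on older ones; adjust the name to your version):

GET /_cat/thread_pool/write?v&h=node_name,name,active,queue,rejected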
