Shard allocation - primaries allocated on same data nodes

Hello everyone!

I have a cluster that I'm in the process of setting up for production. It has six data nodes and three master nodes. For some reason I cannot identify, the shards are not being spread across every available data node. Look at this example:

id                     shard sc     dc    sto n    pr ur
F1uGxNwOQEyTps-rFVoxyQ 4     12 217034 24.6mb es05 p  
QL8h_hvuRU21XPKjl9Qc3g 3      4 216525 23.9mb es06 p  
F1uGxNwOQEyTps-rFVoxyQ 2     12 216258 26.8mb es05 p  
QL8h_hvuRU21XPKjl9Qc3g 1      6 216319   24mb es06 p  
QL8h_hvuRU21XPKjl9Qc3g 5     12 217263 23.9mb es06 p  
F1uGxNwOQEyTps-rFVoxyQ 0      7 216258 23.8mb es05 p 
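
For reference, the output above looks like it came from a _cat/shards call with explicit columns, roughly like this (the column list is my guess based on the headers shown, and the index name is just a placeholder):

GET _cat/shards/my-index?h=id,shard,sc,dc,sto,n,pr,ur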

I set this setting (cluster.routing.allocation.same_shard.host) trying to force Elasticsearch, when it creates a new index (and its shards), to allocate every primary shard on a "dedicated" data node and every replica on a different data node. But this is not happening. My index template uses 6 primary shards and 1 replica.

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.same_shard.host": "true"
  }
}

I assumed Elasticsearch would do this balancing by default, since we don't want copies of the same shard allocated on the same data node. Is there any specific configuration I need for Elasticsearch to do this? Do I need to use shard allocation awareness?
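
For what it's worth, my understanding is that shard allocation awareness would look roughly like the sketch below; "rack_id" and "rack_one" are just example values, not something I have configured:

# elasticsearch.yml on each data node (example custom attribute)
node.attr.rack_id: rack_one

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "rack_id"
  }
}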

I'm using Elasticsearch 7.7.1.

Thanks!!

Look at this image: I have all six data nodes, but the majority of the processing is happening on just two of them.

Here is a recently created index; almost all of its primary shards are allocated on two nodes.

id                     shard sc     dc    sto n    pr ur
C7mw9mtwT9OmZ787IOKLOw 1      7 436970 47.9mb es01 r  
LpEzk671SuqdDFE_JYmdCA 3      5 438570 47.5mb es02 r  
YpNbjM6NTem0OS1aLy7nLA 0      5 437816 46.3mb es03 r  
3qKMd8-VTGqX8ZNLRbD6vw 2     11 437355 46.2mb es04 r  
F1uGxNwOQEyTps-rFVoxyQ 4      8 438884 47.1mb es05 p  
F1uGxNwOQEyTps-rFVoxyQ 2     13 437355 46.3mb es05 p  
F1uGxNwOQEyTps-rFVoxyQ 5                      es05 r  INDEX_CREATED
F1uGxNwOQEyTps-rFVoxyQ 0      6 437863 46.2mb es05 p  
QL8h_hvuRU21XPKjl9Qc3g 4                      es06 r  INDEX_CREATED
QL8h_hvuRU21XPKjl9Qc3g 3     11 438709 47.4mb es06 p  
QL8h_hvuRU21XPKjl9Qc3g 1     11 437299 47.7mb es06 p  
QL8h_hvuRU21XPKjl9Qc3g 5     10 439489 52.9mb es06 p 

Thanks!!

Why do you want this? It's not something we recommend forcing.

Hi warkolm, thanks for your reply. That is exactly what I don't want: two (or more) primaries of the same index on the same host. But that is what is happening. Any ideas on why this is happening? Thanks

The cluster can promote replica shards to primary shards if nodes are lost or unavailable, so this is something you cannot completely control. If a node becomes unavailable or is restarted, shards on other nodes will be promoted to primaries. When the node rejoins the cluster, it will get replica shards assigned, but it will likely hold no primary shards.
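
If you really need to move a specific primary off a node, one option is to cancel its allocation with the reroute API so that its replica gets promoted, but that is a manual nudge rather than a guarantee, since the cluster can rearrange primaries again later. A sketch (index, shard number and node here are placeholders):

POST _cluster/reroute
{
  "commands": [
    {
      "cancel": {
        "index": "my-index",
        "shard": 0,
        "node": "es05",
        "allow_primary": true
      }
    }
  ]
}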

