Primary shards not evenly balanced across the nodes - Is it normal behavior?

Hello,

While doing some tests, I noticed that after creating an index (empty at creation time) in a cluster with five data nodes, the primary shards are not evenly balanced:

GET _cat/nodes

192.168.2.158 67 96 94 2.47 1.56 0.77 dlmrt - elastic2
192.168.2.107 23 94  1 0.15 0.25 0.26 ilr   - logstash1
192.168.2.106 29 89 96 2.08 1.60 0.81 dlmrt * elastic1
192.168.2.143 64 95  1 0.26 0.22 0.14 dlrt  - elastic5
192.168.2.114 57 94 35 1.56 0.79 0.37 dlmrt - elastic3
192.168.2.252 59 67 31 1.82 0.86 0.38 dlrt  - elastic4
192.168.2.42  23 94  0 0.14 0.20 0.18 ilr   - logstash2
PUT temp3
{
  "settings": {
    "index": {
      "number_of_shards": 3,  
      "number_of_replicas": 0,
      "refresh_interval": "-1" 
    }
  }
}
GET _cat/shards/temp3

temp3 1 p STARTED 333048 173.2mb 192.168.2.158 elastic2
temp3 2 p STARTED 462817 191.2mb 192.168.2.106 elastic1
temp3 0 p STARTED 432296 186.7mb 192.168.2.158 elastic2
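
(As a side note, here is a sketch of how to ask the cluster why a particular shard ended up on a given node, using the allocation explain API with the index name and shard number from the output above; the response lists the per-node decisions the allocator made for that shard:)

GET _cluster/allocation/explain
{
  "index": "temp3",
  "shard": 0,
  "primary": true
}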

Is it normal that I have two primary shards on the same node (elastic2)?

Thanks for your feedback!

That's correct. Elasticsearch tries to balance the total number of shards per node across the cluster; the allocation decision is not made per index.
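
To see that the totals are indeed balanced, you can check the per-node shard counts, and if you really need the shards of one index spread out, you can set a per-index limit. A sketch (the setting is the standard index.routing.allocation.total_shards_per_node; the value 1 is just an example for a 3-shard, 0-replica index on five data nodes):

GET _cat/allocation?v

PUT temp3/_settings
{
  "index.routing.allocation.total_shards_per_node": 1
}

Note that total_shards_per_node is a hard limit: if too few nodes are available, shards can remain unassigned, so it is usually better to leave the default balancing alone unless a real hotspot appears.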

Very clear. Thank you, David!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.