I have 4 nodes in my cluster. All settings are default; there are no
extra shard allocation settings.
I have an index with 3 shards and 2 replicas each, for a total of 9 shards.
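(For reference, the index was created with nothing unusual, roughly like
this; the index name "myindex" is just a placeholder:)

  # "myindex" is a placeholder for the real index name
  curl -XPUT 'http://localhost:9200/myindex' -d '{
    "settings" : {
      "number_of_shards" : 3,
      "number_of_replicas" : 2
    }
  }'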
I turned off one of the nodes and the shards were redistributed evenly: 1
primary and 2 replicas on each of the remaining nodes. All is well.
I turned off one more. As expected, 2 primaries ended up on one node.
I then brought both of the missing nodes back online.
- number_of_nodes: 4
- number_of_data_nodes: 4
- active_primary_shards: 3
- active_shards: 9
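Those numbers are from the cluster health endpoint, i.e.:

  curl 'http://localhost:9200/_cluster/health?pretty=true'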
Everything is back as expected, but ...
From the _plugin/head Cluster State view, or just the URI /_cluster/state,
I can see that I have 2 primaries on one node; otherwise everything is
distributed well. (See listing below.)
I was expecting that reallocation would eventually distribute the primary
shards around rather than leaving two on the same node. I assumed this
because I thought primaries do all the work when querying, so having two
on the same node would make that node work harder while the other two sit
around waiting only for inserts.
Is this a correct description of searching?
Should I care, if multiple primaries are on the same node?
If so, can I make them move through some cluster update or other method?
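For example, would a manual move via the cluster reroute API be the kind
of thing to use here? Something like the following (node IDs taken from
the listing below; "myindex" is a placeholder for the real index name):

  # move primary shard 0 off the doubled-up node to a node that holds
  # no copy of shard 0 ("myindex" is a placeholder)
  curl -XPOST 'http://localhost:9200/_cluster/reroute' -d '{
    "commands" : [ {
      "move" : {
        "index" : "myindex",
        "shard" : 0,
        "from_node" : "dQtKNcrvTJm75TRZF8-6Jg",
        "to_node" : "SdYrPmJDR7KP43woxLVRYA"
      }
    } ]
  }'

I'm not sure whether a shard moved this way keeps its primary status, or
whether this is even the intended use of reroute.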
The following is a condensed version of the routing section of
http://.../_cluster/state?pretty=true. Note the 2 nodes with only 2
replicas and the 1 node with 2 primaries and 1 replica.
-Paul
"routing_nodes" : {
"unassigned" : [ ],
"nodes" : {
"dQtKNcrvTJm75TRZF8-6Jg" : [ {
"primary" : true,
"shard" : 0,
}, {
"primary" : false,
"shard" : 1,
}, {
"primary" : true,
"shard" : 2,
} ],
"XYDqhIA7QD-R0EsUkaepaA" : [ {
"primary" : false,
"shard" : 0,
}, {
"primary" : false,
"shard" : 2,
} ],
"SdYrPmJDR7KP43woxLVRYA" : [ {
"primary" : false,
"shard" : 1,
}, {
"primary" : false,
"shard" : 2,
} ],
"gEDcSprISSCMaJcoykUq4Q" : [ {
"primary" : false,
"shard" : 0,
}, {
"primary" : true,
"shard" : 1,
} ]
}
},
I was hoping that reallocation would notice the excess primaries on one
node and move one of them (I don't care which) to one of the other nodes
that now hold only replicas.