3 node ES cluster...one node only holds replicas

(Emmett Hogan) #1

Today I had to restart my cluster to change some configuration settings.

I think I did it the right way... following these instructions:


However, after I turned shard allocation back on, the third node in my cluster has only replicas, no primary shards, and the second node has only primaries, no replicas. I know that I am still covered if I have a node failure, but I am confused as to why ES would have reallocated my shards in this manner. This is after 3+ hours of being up, and I have a pretty small dataset:
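The shard-allocation toggle referred to above can be sketched as follows (ES 2.x cluster-settings names; a sketch of the usual restart procedure, not necessarily the exact commands used):

```shell
# Before restarting: disable shard allocation so ES does not start
# rebuilding replicas while nodes drop out of the cluster.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'

# ... restart the nodes one at a time ...

# After all nodes have rejoined: re-enable allocation.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
```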

Indices: 54
Total Shards: 348
Unassigned Shards: 0
Documents: 113,476,598
Data: 128GB
Uptime: 3 hours
Version: 2.2.0

Can anyone shed some light on it for this ES newcomer?



(Anh) #2

I don't think ES balances primary and replica shards evenly among nodes, and that's normal. If you index data through only one node, all primary shards may end up allocated on that node. I have seen similar behavior, and there's nothing to worry about.

(Emmett Hogan) #3

Ok...I just thought that it was strange that it "chose" to put only replicas on one of the nodes and only primaries on another.

Thanks for the info!


(Mark Walkom) #4

It's somewhat random. You can force the primaries to be spread out by disabling replicas, letting things reallocate, then adding them back.
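The disable-then-restore trick can be sketched with the index-settings API (ES 2.x; `my-index` is a placeholder index name):

```shell
# Drop replicas: every remaining shard copy for the index is now a
# primary, spread across whichever nodes held a copy.
curl -XPUT 'localhost:9200/my-index/_settings' -d '{
  "index": { "number_of_replicas": 0 }
}'

# Once the cluster is green again, add the replicas back; ES
# allocates the new replica copies on other nodes.
curl -XPUT 'localhost:9200/my-index/_settings' -d '{
  "index": { "number_of_replicas": 1 }
}'
```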

(Anh) #5

Hi Mark,

Would indexing be faster if all primary shards were located on the node that receives the indexing requests?

(Mark Walkom) #6

Depends :stuck_out_tongue:
You might run into contention for example, but unless you are doing massive volumes, it's unlikely.

(Christian Dahlqvist) #7

Primary and replica shards do the same amount of work, so it should not matter.

(Anh) #8

So when indexing, an index request has to complete on both the primary and the replica before it is considered finished?

(Mark Walkom) #9

Yeah, it does.
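This shows up in the response to an index request: with one replica configured, the `_shards` section counts both copies (a sketch; `my-index`, the type, and the document are placeholders):

```shell
# Index a document; the request is replicated to the replica before
# the response is returned.
curl -XPUT 'localhost:9200/my-index/doc/1' -d '{"field": "value"}'

# The response's _shards section reports the primary plus the
# replicas that acknowledged the write, along the lines of:
#   "_shards": { "total": 2, "successful": 2, "failed": 0 }
```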
