Hi, we had 2 nodes: one master-eligible with data, and one data-only with no master role. We have now added a 3rd node to the cluster, also data-only with no master role. What is happening now is that the indices are writing data only to the newly added third node, and the first 2 nodes do not have any shards allocated.
I want the data/shards to be allocated across all 3 nodes. Is there any issue with the configs? Can you please assist?
So only one of your nodes is master-eligible? That has nothing to do with your problem, but you really should have at least three master-eligible nodes in order to avoid split-brain situations.
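On the pre-7.x Elasticsearch versions commonly used with Graylog, a minimal sketch of the relevant settings would look something like this (assuming zen discovery; the exact discovery settings depend on your version):

```yaml
# elasticsearch.yml on each of the three nodes (pre-7.x zen discovery assumed)
node.master: true   # master-eligible
node.data: true     # also holds data

# With three master-eligible nodes, require a majority (2) for master
# election so a network split cannot produce two masters.
discovery.zen.minimum_master_nodes: 2
```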
Hi Magnus,
This was my earlier setup:
node1 - master=yes, data=yes
node2 - master=no, data=yes
With the above settings the shards were split between the two nodes and index data was written to both.
Now I have added an additional node with this config, and the above settings remain intact:
node3 - master=no, data=yes
After that, all the shards are being allocated to node3, but node1 and node2 do not get any shards when new indices are stored.
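In elasticsearch.yml terms, my understanding is that the settings above correspond roughly to this (just a sketch; exact file locations depend on the install):

```yaml
# node1 elasticsearch.yml
node.master: true
node.data: true

# node2 and node3 elasticsearch.yml
node.master: false
node.data: true
```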
Thanks, Hema.
But shards on node1 and node2 are being redistributed to node3, aren't they? With the default settings Elasticsearch strives to equalize the number of shards on each node.
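One thing worth checking is whether any allocation or rebalancing settings have been overridden on the cluster, along these lines (the persistent and transient sections will be empty if nothing has been changed from the defaults):

```
curl -s 'localhost:9200/_cluster/settings?pretty'
# look for overrides such as cluster.routing.allocation.enable
# or cluster.routing.rebalance.enable in the persistent/transient sections
```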
Hi Magnus, yes, that is what I thought: that the shards would be distributed equally among the 3 nodes. But as of now it is only writing to node3; node1 and node2 do not have the data, nor is the index even created in their data directories.
Yes, but what is the total number of shards on each machine? I don't think ES has any guarantees about newly created indexes being distributed evenly, only that the total number of shards is balanced.
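The cat APIs are an easy way to see that, for example (host and port are whatever your cluster listens on):

```
# shards and disk usage per node
curl -s 'localhost:9200/_cat/allocation?v'

# one line per shard, including the node it lives on
curl -s 'localhost:9200/_cat/shards?v'
```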
Hi Magnus, the total number of shards is 5 per index; all indices use the default of 5 shards.
From within Graylog I have configured only 5 shards per index. Before node3 was added it was 3 shards on node1 and 2 shards on node2. Now all 5 shards sit on node3 alone, and the other 2 nodes do not get any data.
Currently a total of 999 shards are open. When node3 was added, ES moved one shard from each of the open indices to node3, and now all 5 shards of new indices are written to node3 alone. I also worry about how this affects search performance, and that the data might not be searchable, or might be lost, if node3 goes down.
If you have 999 shards in a two-node cluster and add another node, ES should begin to rebalance those shards so that each node eventually holds 333 shards. If that doesn't happen I'd look into why not.
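If you are on Elasticsearch 5.0 or later, the allocation explain API can tell you why a specific shard is or isn't being moved; a sketch (the index name and shard number are placeholders):

```
curl -s -XGET 'localhost:9200/_cluster/allocation/explain' \
  -H 'Content-Type: application/json' -d '
{
  "index": "graylog_0",
  "shard": 0,
  "primary": true
}'
```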
If you want the cluster to be resilient against nodes going down, you should make sure all nodes are eligible to become masters. Otherwise it's game over if your master node goes down.
Thanks Magnus, I will change the settings to make all 3 nodes master-eligible. I also figured out why the shards were being written to only 1 node: it was because that node had more free disk space. So I set the cluster setting for disk-based shard allocation thresholds to false, after which the shards were spread across all 3 nodes.
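The change was along these lines, assuming the setting in question is cluster.routing.allocation.disk.threshold_enabled:

```
curl -s -XPUT 'localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' -d '
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": false
  }
}'
```

Note that disabling the disk thresholds also removes the protection against filling a node's disk completely, so freeing space or raising the watermark settings is usually the safer long-term fix.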