According to the official documentation, when disk usage reaches 85%, replica allocation becomes restricted; at 90%, shards start getting relocated to other nodes (but since I have only two nodes, and primary and replica shards cannot be placed on the same node, replicas cannot be relocated anywhere, right?); and at 95%, all indices with a shard on the current node are set to read-only. If my cluster consists of just two nodes with a substantial difference in disk capacity, does that mean that when the smaller-disk node fills up, the entire cluster becomes unavailable? Or does it mean that when the smaller node reaches 95%, data continues to be written to the larger node, with the only impact being the inability to create replicas, without affecting the usability of the whole cluster?
With a two-node cluster where you have a big disk node (BIG_NODE) and a small disk node (SMALL_NODE), the watermarks will work something like this, considering that SMALL_NODE will be affected first.
Low Watermark, 85% of disk usage on SMALL_NODE: Elasticsearch will stop allocating shards on the node. This does not impact primary shards of newly created indices, but no replica shards will be allocated on it anymore.
High Watermark, 90% of disk usage on SMALL_NODE: Elasticsearch will try to relocate shards away from the node. Since you have only two nodes, it will not be able to move any shards away, and it still cannot allocate new shards there, so new indices will not get a replica allocated anymore.
Flood Stage, 95% of disk usage on SMALL_NODE: Elasticsearch will set every index that has at least one shard on this node to read-only. Since you have only two nodes and Elasticsearch balances the shards between them, this will impact writes on every index, except indices that were created after SMALL_NODE reached the low watermark, as those have all their shards on BIG_NODE.
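For reference, those three thresholds are dynamic cluster settings, so you can change them at runtime. A sketch of a settings update using the default values (the setting names are the real ones from the disk-based allocation settings; the percentages here are just the defaults, adjust them to your case):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}
```

You can also set them as absolute free-space values (e.g. "50gb" of free space) instead of percentages, which tends to be easier to reason about when the nodes have very different disk sizes.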
So, in summary, if SMALL_NODE reaches the flood stage, any index that has at least one shard on SMALL_NODE, be it a primary or a replica shard, will be set to read-only until you free some space.
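One extra note on recovering from the flood stage: on recent versions Elasticsearch automatically removes the read-only block once disk usage drops back below the high watermark, but on older versions (before 7.4, if I recall correctly) you had to remove it manually after freeing space. A sketch of the manual reset, applied to all indices:

```
PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```

Setting the block to `null` resets it to its default (unset) state, so writes are allowed again as long as the node stays below the flood-stage watermark.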
But this is not the main issue. The main issue is that a two-node cluster is not resilient to failures: it can effectively have only one master-eligible node, and if you lose the master node, the entire cluster will be unavailable until that node is back. So in this case it does not make much sense to use replicas with a two-node cluster.