Understanding watermarks

Hi,

I understand (and please correct me if I'm wrong) that:

When a node reaches the low disk watermark, no new shards will be allocated to it.
And when a node reaches the high disk watermark, the cluster will attempt to move shards off it to other nodes.
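
(For reference, these thresholds are controlled through the cluster settings API. Below is a minimal sketch using the Python elasticsearch client; the client choice, cluster address, and the percentage values are only assumptions for illustration.)

```python
# Minimal sketch: setting the disk-based allocation watermarks via the
# cluster settings API. Assumes the Python `elasticsearch` client and a
# cluster reachable at localhost:9200; the percentages are example values.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.cluster.put_settings(body={
    "transient": {
        # Below this, the node can still receive new shards.
        "cluster.routing.allocation.disk.watermark.low": "85%",
        # Above this, the cluster tries to relocate shards off the node.
        "cluster.routing.allocation.disk.watermark.high": "90%",
        # How often each node's disk usage is sampled.
        "cluster.info.update.interval": "1m",
    }
})
```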

In ES 5.3.2:
Does reaching any of the disk watermarks cause any additional effects? For example, do indices become read only at some point? If so, at what point?

Also, please answer the same question regarding ES 6.2.4.

Thank you

No, not in 5.3.2; that feature was added in 6.0.

Yes, in 6.2.4 indices become read-only when the flood_stage watermark is reached.
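
(In 6.x the flood_stage watermark applies the index.blocks.read_only_allow_delete block to the affected indices, and as far as I recall it is not removed automatically in 6.2 once disk space is freed, so you have to clear it yourself. A minimal sketch with the Python elasticsearch client; the client and cluster address are assumptions.)

```python
# Minimal sketch: removing the read_only_allow_delete block that the
# flood_stage watermark applies in 6.x, once disk space has been freed.
# Assumes the Python `elasticsearch` client and a cluster at localhost:9200.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Setting the value to null removes the block; index="_all" applies it
# to every index in the cluster.
es.indices.put_settings(
    index="_all",
    body={"index.blocks.read_only_allow_delete": None},
)
```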

I see, thanks.

Also, is my understanding of the watermarks correct?

In my cluster, a new index had a shard allocated on a node which had passed the low watermark. Is this possible?

Yes, that's possible because of #6196: a brand-new empty primary might be allocated on a node above the low watermark. The reason for this is that you might pass the low watermark on all nodes without realising it. If new primary allocation were blocked and you were using daily indices, you might get to midnight, create the new day's indices, and immediately have your cluster health go red with all new indexing failing. By allowing brand-new empty primaries to be allocated above the watermark, the cluster health only goes yellow and the cluster continues to accept new data, which gives you a chance to deal with the low disk space before it becomes catastrophic.
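
(If you want to see the decision that was actually made for a particular shard, the cluster allocation explain API reports each allocation decider's verdict, including the disk threshold decider. A rough sketch with the Python elasticsearch client; the client, cluster address, and index name are placeholders.)

```python
# Rough sketch: asking the cluster why a given shard is allocated where it is.
# "my-new-index" is a placeholder; assumes the Python `elasticsearch` client
# and a cluster reachable at localhost:9200.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

explanation = es.cluster.allocation_explain(body={
    "index": "my-new-index",
    "shard": 0,
    "primary": True,
})
# The response lists each allocation decider's decision, including the
# disk threshold decider, so you can see how the watermarks applied.
print(explanation)
```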

I see. So let me see if I understand the rules (for ES 5.x):

When the low watermark is reached on a node, shards will not be relocated to it, but new empty primary shards can still be allocated to it.

When the high watermark is reached on a node, the cluster will attempt to move shards off it to other nodes, and new primary shards will also not be allocated to it (meaning, for example, that if all nodes in the cluster are above the high watermark, creating a new index will make the cluster health red).

Is this correct?
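
(To see where each node stands relative to these thresholds, the _cat/allocation API reports per-node disk usage and shard counts. A rough sketch with the Python elasticsearch client; the client and cluster address are assumptions.)

```python
# Rough sketch: listing per-node disk usage and shard counts, which makes it
# easy to compare each node against the low/high watermarks.
# Assumes the Python `elasticsearch` client and a cluster at localhost:9200.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# v=True adds column headers to the plain-text table that is returned.
print(es.cat.allocation(v=True))
```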

That sounds roughly right. I don't want to say it's 100% right, because the only 100% accurate description is the code itself.

Also, v5.3.2 and v6.2.4 are so far past the end of their supported lives that I don't have a development environment for them any more.

There are some other complexities around tracking the sizes of shards that are currently relocating, which may be further complicated by this bug and this bug and possibly more.
