High flood-stage disk watermark results in read-only indexes

I have a 1 TB SSD in my notebook with 15 GB free at the moment. Although the ES node data takes only 130 MB, all indexes including .kibana are marked as read-only and any further work is blocked.

I would consider that a design flaw. Why is the disk watermark not related to the volume of the ES data, i.e. 130 MB vs. 15 GB?

Changing the flag in the following way does not help, because ES sets it back within a few seconds:
PUT /_all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
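For the record, the block keeps coming back because ES looks at the free space on the whole filesystem, not at the size of its own data. A minimal way to see the numbers ES acts on is the cat allocation API (the exact columns may vary by version):

GET /_cat/allocation?v
# disk.percent is computed against the whole filesystem, not disk.indices;
# once it exceeds the flood stage (95% by default), the block is re-applied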

Why is the disk watermark not related to the volume of the ES data, i.e. 130 MB vs. 15 GB?

The watermarks are intended to be configured based on what you know about your system. This is a case where there is no "best" default, but the currently chosen default flood-stage watermark is intended to protect a large number of users, because running out of disk space is a not-very-fun experience. It defaults to 95% of the total disk size. If you have a different number that's better for your system, you can set it via the cluster.routing.allocation.disk.watermark.flood_stage setting rather than just trying to fight with the read_only_allow_delete flag.
See Disk-based shard allocation (Elasticsearch Guide [8.11]) for the configuration details.
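As a sketch of what that looks like (the byte values below are arbitrary placeholders, not recommendations): the watermarks can be changed through the cluster settings API, and, as far as I know, percentage and byte values cannot be mixed across the watermark settings, so all three are given here as absolute values:

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "10gb",
    "cluster.routing.allocation.disk.watermark.high": "5gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "1gb"
  }
}

Once the node is back below the flood stage, recent versions (7.4+, if I remember correctly) remove the index.blocks.read_only_allow_delete block automatically; on older versions, the PUT /_all/_settings call from above then sticks instead of being re-applied.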

Well, thank you for the link. The possibility to use absolute size limits for the volumes solves my issue. In fact, I have already solved it by moving the ES node data to another, almost empty 50 GB partition, where the relative limits work fine. Nevertheless, consider an 8 TB HDD on which 400 GB must remain free just to run any small ES task under the default relative limits; that does not sound right to me (even though percentage-based free-space warnings are a generally accepted strategy, and they work well when an ES node uses partitions dedicated to it alone). Thank you.
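For completeness, regarding the 8 TB case: recent 8.x releases (the [8.11] guide linked above documents this) also provide max_headroom variants of the watermark settings, which cap the absolute amount of free space required on large disks. A sketch, assuming a recent 8.x version and treating 20gb as an arbitrary example value:

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.flood_stage.max_headroom": "20gb"
  }
}
# with this cap, no more than ~20 GB has to stay free on an 8 TB disk,
# even though 5% of it would otherwise be 400 GB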
