You will usually see this error when one or more of your nodes reach the flood stage disk watermark.
When Elasticsearch detects that a node's disk is nearly full, it sets every index that node holds to a read-only state to protect the data in those indices. Because the read-only block cannot be applied at the shard level, only at the index level, you may receive this error from a node that is not itself nearing its watermark, so make sure you check your other nodes as well.
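One quick way to see which nodes are running out of space is the cat allocation API, sorted here so the fullest nodes appear first (the column list is only a suggestion):
GET _cat/allocation?v=true&h=node,shards,disk.percent,disk.used,disk.avail,disk.total&s=disk.percent:desc
Any node whose disk.percent is at or above the flood stage watermark is the one that triggered the read-only block.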
The default flood stage watermark level, which is 95% of the total size of the disk Elasticsearch has identified as its path.data, can be altered dynamically. You may want to set this manually if you have larger disks, since the 5% the default keeps free on a 4TB disk is roughly 200GB, or increase it temporarily to allow you to delete the index and free up space:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.low.max_headroom": "100GB",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.high.max_headroom": "20GB",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%",
    "cluster.routing.allocation.disk.watermark.flood_stage.max_headroom": "5GB",
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen": "97%",
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom": "5GB"
  }
}
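If you want to confirm the new values have been applied, one option is to read the watermark settings back (filter_path just trims the response to the relevant keys):
GET _cluster/settings?include_defaults=true&flat_settings=true&filter_path=*.cluster.routing.allocation.disk.watermark*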
PUT */_settings?expand_wildcards=all
{
  "index.blocks.read_only_allow_delete": null
}
The steps to troubleshoot the underlying allocation issues are given in the Fix watermark errors documentation.
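What actually frees up the space depends on your data. As a minimal sketch, assuming an index named old-logs-2023.01 (a hypothetical name) is no longer needed, deleting it reclaims its disk immediately:
DELETE /old-logs-2023.01
Alternatively, reducing index.number_of_replicas on large indices, adding nodes, or expanding the underlying disks will also bring nodes back under their watermarks.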
Once the disk space issue has been resolved, you can set the cluster settings back to their defaults and the index back to a writable state using these calls:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.low.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.high": null,
    "cluster.routing.allocation.disk.watermark.high.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.flood_stage": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom": null
  }
}
PUT /INDEXNAME/_settings
{
  "index.blocks.read_only_allow_delete": null
}
NOTE - Elasticsearch will treat the null value above as a request to remove the index.blocks.read_only_allow_delete setting from the index, thereby making it writable.
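If you want to double check that the block has been cleared, you can read the index settings back, assuming the same INDEXNAME placeholder as above:
GET /INDEXNAME/_settings?filter_path=*.settings.index.blocks
An empty response means no index-level blocks remain and the index should accept writes again.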