Disk usage exceeded flood-stage watermark

Hello everybody,

I need some help with design and settings.
I have a cluster of 6 nodes, each running an Elasticsearch server instance.
An error was logged in the Logstash log file:

TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark,
index has read-only-allow-delete block
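For context, once disk usage drops back below the high watermark, Elasticsearch 7.4+ removes this block automatically; on older versions (or if the block persists) it has to be cleared manually. A minimal sketch, assuming the cluster is reachable at `localhost:9200`:

```shell
# Clear the read-only-allow-delete block on all indices.
# Only needed on pre-7.4 clusters, or if the block lingers after
# freeing disk space; adjust the host/port for your cluster.
curl -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```

Note that clearing the block without freeing space is only a temporary fix; Elasticsearch will reapply it when the flood-stage watermark is exceeded again.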

After checking, it appears that on one of the 6 nodes, almost 400 GB out of 430 GB are used for storage, so more than 90%.
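To confirm which nodes are close to the watermark, the `_cat/allocation` API shows disk usage and shard counts per node. A sketch, again assuming `localhost:9200`:

```shell
# Per-node disk usage and shard allocation
# (v adds a header row, h selects the columns to display).
curl -X GET "localhost:9200/_cat/allocation?v&h=node,shards,disk.used,disk.avail,disk.percent"
```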

So my questions are: is there a way to "archive" part of the indexed documents to reduce used disk space? What are the best practices, and how do you deal with large data volumes?

Thanks a lot for your recommendations.

Take a look at Data management | Elasticsearch Guide [8.3] | Elastic
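In particular, index lifecycle management (ILM) described in that guide can delete or shrink old indices automatically. A minimal sketch of a policy that rolls indices over and deletes them later; the policy name `logs-cleanup` and the `50gb`/`7d`/`30d` thresholds are illustrative examples, not values from this thread:

```shell
# Example ILM policy: roll over hot indices, delete them 30 days
# after rollover. Attach it to an index template to take effect.
curl -X PUT "localhost:9200/_ilm/policy/logs-cleanup" \
  -H 'Content-Type: application/json' \
  -d '{
    "policy": {
      "phases": {
        "hot": {
          "actions": {
            "rollover": { "max_size": "50gb", "max_age": "7d" }
          }
        },
        "delete": {
          "min_age": "30d",
          "actions": { "delete": {} }
        }
      }
    }
  }'
```

If the data must be kept rather than deleted, snapshotting old indices to a repository (e.g. S3 or a shared filesystem) before deleting them is the usual "archive" approach.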


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.