I need some help with design and settings.
I have a cluster of 6 nodes, each running an Elasticsearch server instance.
An error was logged in the Logstash log file:
TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block
After checking, it appears that on one of the 6 nodes, almost 400 GB out of 430 GB are used for storage, so more than 90%.
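For anyone wanting to reproduce the check: per-node disk usage can be read from the cat allocation API (request shown in Kibana Dev Tools syntax; the column selection via `h` is optional):

```
GET _cat/allocation?v&h=node,disk.used,disk.total,disk.percent
```

The `disk.percent` column makes it easy to spot which node has crossed the 95% flood-stage watermark.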
So my questions are: is there a way to "archive" part of the indexed documents to reduce used disk space? What are the best practices, and how do you proceed to deal with large data volumes?
Thanks a lot for your recommendations.