Hey guys, I'm currently running an Elasticsearch container in Docker as a local development environment. Everything seemed fine at first, but at some point I got the error in the title, or to be exact:
2025-04-17 05:39:34.481 ERROR [BufferedIncrement-default-1][BulkDocumentRequestExecutorImpl:48] failure in bulk execution:
[0]: index [liferay-60813834493731], type [doc], id [com.liferay.document.library.kernel.model.DLFileEntry_PORTLET_32314], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [liferay-60813834493731] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
[1]: index [liferay-60813834493731], type [_doc], id [com.liferay.document.library.kernel.model.DLFileEntry_PORTLET_32314], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [liferay-60813834493731] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]] [Sanitized]
I have looked at other similar posts and problems, but either I'm not understanding what's going on or I missed something.
You are getting close to running out of disk space, and Elasticsearch has made the index read-only to prevent your data from being corrupted by the disk actually filling up.
Increase the amount of disk space or delete some indices. You can also change the watermark settings, but be aware that they exist to protect you (I would not recommend this).
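Once you have freed up disk space, the index may still carry the block: on versions before 7.4 Elasticsearch does not remove `read_only_allow_delete` automatically, and even on newer versions it only clears once usage drops back below the flood stage. A minimal sketch of clearing it manually, assuming a local single-node cluster on `localhost:9200` without security enabled:

```shell
# Clear the read-only-allow-delete block on all indices.
# Setting the value to null reverts it to the default (no block).
curl -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```

Only do this after you have actually freed space; otherwise the block will simply come back at the next disk check.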
Elasticsearch looks at the amount of free disk space relative to the total disk space on the node. If you are running Elasticsearch on a host with a lot of other components, it may be worthwhile to change the watermarks to absolute sizes in GB instead of percentages, but make sure you are not too aggressive, as that can lead to complete data loss.
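If you do go down that road, the watermarks accept byte values instead of percentages. A sketch using the real `cluster.routing.allocation.disk.watermark.*` settings; the specific values here are just illustrative, pick ones that leave real headroom on your disk:

```shell
# Switch the disk watermarks from percentages to absolute free-space values.
# low/high/flood_stage must be in decreasing order of free space required.
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "cluster.routing.allocation.disk.watermark.low": "50gb",
      "cluster.routing.allocation.disk.watermark.high": "25gb",
      "cluster.routing.allocation.disk.watermark.flood_stage": "10gb"
    }
  }'
```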
Remember (or learn) that Elasticsearch sometimes needs to rewrite a lot of content (merging segments), which temporarily requires extra disk space. That's why 95% disk usage means "full" to Elasticsearch: there's a risk it can't rewrite what it needs to.
Just for the sake of completeness, from 8.5.0 onwards it's a little more complicated than this (see #88639). If the disk is larger than ~2TiB, then Elasticsearch only blocks writes once the free space drops below 100GiB, which may correspond to much more than 95% usage. For instance, on a 10TiB volume you can get all the way up to 99% full before hitting this disk watermark.
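To make the 99% figure concrete, here is the arithmetic as a small shell sketch, assuming the 8.5+ defaults of a 95% flood-stage watermark (i.e. 5% free) capped by a 100GiB max headroom:

```shell
# Effective flood-stage threshold = min(5% of total disk, 100 GiB headroom)
disk_gib=10240                        # a 10 TiB volume, expressed in GiB
five_pct=$(( disk_gib * 5 / 100 ))    # 5% free would be 512 GiB
headroom=100                          # max headroom cap, in GiB
required_free=$(( five_pct < headroom ? five_pct : headroom ))
echo "writes blocked below ${required_free} GiB free"
```

So on this volume writes are only blocked below 100GiB free, which is about 99% usage.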
But in any case, yes, 33GiB of free space is worryingly close to "full" in Elasticsearch terms. A force-merge of a shard at the commonly recommended size of 50GiB would run you out of disk space on temporary storage alone.
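To see where you actually stand, the `_cat/allocation` API reports per-node disk usage. A sketch, again assuming a local cluster on `localhost:9200`:

```shell
# Show disk usage per data node: used, available, and percent used.
curl -s "localhost:9200/_cat/allocation?v&h=node,disk.used,disk.avail,disk.percent"
```

Comparing `disk.avail` against the watermarks above tells you how much room you have before the block returns.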