The documentation for cluster.routing.allocation.disk.watermark.flood_stage
mentions that the whole index will be put in read-only mode if one shard exceeds the limit.
Is there another way to handle this scenario? One of my nodes has much less disk space than the others, and I don't want this node to become a bottleneck. Still, I want to use its resources for my analysis.
This is the disk space available on my nodes:
node 1: 14 TB
node 2: 5 TB
node 3: 14 TB
node 4: 14 TB
I already set the index to 7 shards, so that nodes 1, 3 and 4 host two shards each and node 2 hosts only one. Even so, node 2 is close to its limit.
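For reference, a minimal sketch of the index setup described above, in Kibana Dev Tools console syntax (the index name `my-index` and the replica count are placeholders, not details from the question):

```
PUT my-index
{
  "settings": {
    "index.number_of_shards": 7,
    "index.number_of_replicas": 0
  }
}
```

Note that `number_of_shards` can only be set when the index is created, not changed afterwards.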
Unfortunately not. It doesn't really make sense to mark a single shard as read-only. If Elasticsearch cannot write to one shard in an index, it cannot write to the index.
Rely on cluster.routing.allocation.disk.watermark.high instead: once a node crosses the high watermark, Elasticsearch relocates shards away from it, so the node should never reach watermark.flood_stage in the first place.
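A sketch of tuning the watermarks via the cluster settings API (console syntax; the byte values below are illustrative, not recommendations). Since node 2's disk is much smaller than the others', absolute free-space values can be easier to reason about than percentages, which scale with each node's total disk size; note that all three watermarks must use the same style, either all percentages or all byte values:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "500gb",
    "cluster.routing.allocation.disk.watermark.high": "250gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "100gb"
  }
}
```

With these values, every node starts shedding shards once it has less than 250 GB free, whether its disk is 5 TB or 14 TB.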