I see the normal settings cluster.routing.allocation.disk.watermark.low, cluster.routing.allocation.disk.watermark.high, and cluster.routing.allocation.disk.watermark.flood_stage.
But I also see cluster.routing.allocation.disk.watermark.flood_stage.frozen and cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom.
My assumption is that this is currently not actually possible, and that the two frozen settings refer to the old "frozen index" concept rather than the new frozen node role, but I figured I'd ask here first.
Context: with the new node roles (hot, warm, cold, frozen), I want to be able to set a different high disk usage watermark for each role, as cold and frozen nodes could have substantially larger disks than hot or warm nodes, and a single percentage value across all node types doesn't make much sense. I'd also rather avoid static byte values, as even 100GB free on a 16TB disk vs. a 1TB disk is a significant difference.
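For reference, these watermarks are dynamic cluster settings, so today they can only be applied cluster-wide, e.g. (illustrative values, roughly the documented defaults):

```json
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}
```

Which is exactly the problem: a request like this applies to every data node regardless of its role.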
No, my understanding is that these settings ARE for the new frozen node type used with searchable snapshots. If you look at the profiles on Elastic Cloud as a model, on AWS:
The 60GB node is an aws.es.datafrozen.i3en with a 4TB SSD, and supports about 93TB of S3.
From the docs on cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom:
(Dynamic) Controls the max headroom for the flood stage watermark for dedicated frozen nodes. Defaults to 20GB when cluster.routing.allocation.disk.watermark.flood_stage.frozen is not explicitly set. This caps the amount of free space required on dedicated frozen nodes.
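So, as a sketch, the frozen-specific watermarks would be set like this (illustrative values; by default only the 20GB max headroom applies):

```json
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom": "20GB"
  }
}
```

Note these only affect dedicated frozen nodes; the other roles still fall back to the generic watermarks.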
You can read more about the headroom and searchable snapshots here.
Ah, thanks @stephenb for that clarification on the frozen settings.
Though, given that the settings are all either cluster.routing.allocation.disk.watermark.(low|high|flood_stage) or cluster.routing.allocation.disk.watermark.flood_stage.frozen*, I'm guessing the complete answer is that there are no node role-based disk watermarks, with the exception of the frozen node role, which has its own settings?
@BenB196 Yes, I do not see role-specific watermarks either... which, interestingly, we were just looking for too, for a specific use case where we had large warm nodes and did not need as much buffer.
I asked internally; if I get anything back I will let you know.