Disk usage exceeded flood-stage watermark in Elasticsearch after Logstash upgrade

Hi Elastic Team,

Kindly help with the issue below. FYI, we have seen this error after upgrading Logstash to 7.16.2; the upgrade was performed to fix the Log4j vulnerability. Please suggest how to fix this problem.

error=>{"type"=>"cluster_block_exception", "reason"=>"index [wbpreregistrationeasttwo-2021.11] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];"}}

Our cluster settings are below.

{
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "cluster_concurrent_rebalance" : "2",
          "node_concurrent_recoveries" : "2",
          "disk" : {
            "watermark" : {
              "low" : "25.0gb",
              "flood_stage" : "1.0gb",
              "high" : "22.0gb"
            }
          },
          "node_initial_primaries_recoveries" : "4"
        }
      },
      "blocks" : {
        "create_index" : "false"
      }
    },
    "indices" : {
      "recovery" : {
        "max_bytes_per_sec" : "125mb"
      }
    },
    "opendistro" : {
      "index_state_management" : {
        "metadata_migration" : {
          "status" : "1"
        },
        "template_migration" : {
          "control" : "-1"
        },
        "allow_list" : [ "delete", "transition", "rollover", "close", "open", "read_only", "read_write", "replica_count", "force_merge", "notification", "snapshot", "index_priority", "rollup", "cold_migration", "cold_delete", "warm_migration" ]
      }
    },
    "search" : {
      "max_buckets" : "10000"
    },
    "plugins" : {
      "index_state_management" : {
        "allow_list" : [ "delete", "transition", "rollover", "close", "open", "read_only", "read_write", "replica_count", "force_merge", "notification", "snapshot", "index_priority", "rollup", "cold_migration", "cold_delete", "warm_migration" ]
      }
    }
  },
  "transient" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "cluster_concurrent_rebalance" : "2",
          "node_concurrent_recoveries" : "2",
          "disk" : {
            "watermark" : {
              "low" : "25.0gb",
              "flood_stage" : "1.0gb",
              "high" : "22.0gb"
            }
          },
          "exclude" : { },
          "node_initial_primaries_recoveries" : "4",
          "awareness" : { }
        }
      }
    },
    "indices" : {
      "recovery" : {
        "max_bytes_per_sec" : "125mb"
      }
    }
  }
}

Regards,
Abhishek

Your flood-stage watermark is configured as an absolute value, so it triggers when a node drops to 1 GB of free disk space. The error you are receiving means that your cluster reached this level, and you cannot write to it until you free up space.
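To confirm which nodes have crossed that threshold, you can check per-node disk usage with the `_cat/allocation` API. A minimal sketch, assuming the cluster is reachable on localhost:9200 (adjust the host and add authentication as your setup requires):

```shell
# Show shard count and disk used / available / percent for each node
curl -s "localhost:9200/_cat/allocation?v"
```

Any node whose disk.avail column is at or below 1gb has hit your flood_stage watermark.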

You will need to delete some data from your cluster and then remove the read-only block from the affected indices, as described in the documentation.
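The steps above can be sketched as follows, again assuming localhost:9200. The index deleted in step 1 is a hypothetical example; only the index name in step 2 comes from your error message:

```shell
# 1. Free disk space, e.g. by deleting an old index you no longer need
#    (wbpreregistrationeasttwo-2021.10 is a hypothetical example -- pick
#    indices you are actually done with):
curl -X DELETE "localhost:9200/wbpreregistrationeasttwo-2021.10"

# 2. Remove the read-only-allow-delete block from the blocked index.
#    On 7.x the block is also released automatically once disk usage
#    drops below the high watermark, but it can be cleared manually:
curl -X PUT "localhost:9200/wbpreregistrationeasttwo-2021.11/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```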

Also, note that you seem to be using Open Distro, which is not supported here.

That is correct; you will need to ask AWS, as it's their product.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.