I am unable to log in to Kibana when my Elasticsearch cluster is getting full.
The log will contain:
org.elasticsearch.xpack.monitoring.exporter.ExportException: ClusterBlockException: blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block]
Although my Elasticsearch volume still has 2.9 GB of 35 GB available.
Any idea what I should do?
When Elasticsearch is running out of disk space, there is a configurable limit (cluster.routing.allocation.disk.watermark.flood_stage) at which indices are set to read-only. This is why you can no longer log in to Kibana. You would need to reconfigure those flood-stage watermarks; here is an explanation:
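For illustration, the watermarks can be adjusted at runtime through the cluster settings API. This is a sketch, not a recommendation: the host/port and the percentage values below are assumptions you would adapt to your own deployment and disk size.

```shell
# Example only: relax the disk watermarks via the cluster settings API.
# "transient" settings are reset on a full cluster restart; use
# "persistent" if the change should survive restarts.
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}'
```

Note that raising the watermarks only buys time; freeing or adding disk space is the real fix.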
Hi @matw, but since login is read-only, I don't understand why login would be disabled?
The cluster is practically unusable web-wise if the storage gets full, which turns into a denial-of-service vulnerability if you're running something important such as a SIEM.
Kibana stores data in
.kibana-prefixed indices, and before 7.10 login was not always read-only; in 7.10+ it is never read-only (we create a session).
Generally, you could try to reset the read-only setting:
What error message is displayed when you try to log in in the flooded state?
Thx & Best,
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.