ES index goes read-only without the node reaching the disk watermark

We are running an ELK cluster (v7.11.2) on Kubernetes, and we noticed that an index went read-only without the underlying node reaching the disk watermark. We see no trace of an error in the ES logs, nor any outlying metric preceding the event.
The disks are at most 50-55% capacity [1].

Our setup:
filebeat DaemonSet -> Redis (1 pod) -> Logstash (2 pods) -> Elasticsearch (2 masters, 2 ingest nodes, 3 data nodes, deployed as StatefulSets)

Redis serves as a buffer in case ES stops ingesting.
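For context, the Logstash side of that pipeline is roughly the following (the Redis host, list key, and ES endpoint below are placeholders, not our actual values):

```conf
input {
  redis {
    host      => "redis"       # placeholder service name
    data_type => "list"
    key       => "filebeat"    # placeholder Redis list key
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]  # placeholder ES endpoint
  }
}
```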
After we detected the issue, we manually removed the read-only block from the index [2], and ES resumed indexing, ingesting approx. 8 million backlogged logs without an issue.

My questions are:

  • Under what conditions can Elasticsearch put an index in read-only mode (aside from the disk watermark)?
  • Logstash runs at the warning log level. Why do we only see logs about ES going read-only after raising the log level to debug?
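For what it's worth, when the flood-stage watermark trips, Elasticsearch sets index.blocks.read_only_allow_delete, while index.blocks.read_only and index.blocks.write are only set explicitly (an API call, an ILM readonly action, etc.). The blocks present on an index can be listed with GET _all/_settings/index.blocks.* — here is a self-contained sketch of post-processing such a response with jq (the index name and values are made up, not from our cluster):

```shell
# Sample response from: GET _all/_settings/index.blocks.*?flat_settings=true
# (index name and values are illustrative)
cat <<'EOF' > settings.json
{
  "filebeat-2021.03.01": {
    "settings": {
      "index.blocks.read_only_allow_delete": "true"
    }
  }
}
EOF

# Print each index alongside whatever block settings are present on it
jq -r 'to_entries[] | "\(.key): \(.value.settings)"' settings.json
```

An index blocked by the watermark shows read_only_allow_delete here; one blocked some other way shows read_only or write instead.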


[1] Per-node disk usage:

curl -s 'localhost:9200/_nodes/stats/fs' | jq '.nodes[] | .name, .fs.total'
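For reference, the same stats can be turned into a per-node used-percentage to compare against the watermarks (7.x defaults: low 85%, high 90%, flood stage 95%). A self-contained sketch with made-up numbers:

```shell
# Shape of a _nodes/stats/fs response, with illustrative byte counts
cat <<'EOF' > fs_stats.json
{"nodes":{"abc123":{"name":"es-data-0","fs":{"total":{"total_in_bytes":100000000000,"available_in_bytes":45000000000}}}}}
EOF

# Report disk used % per node: 100 - available/total*100
jq -r '.nodes[] | "\(.name): \(100 - .fs.total.available_in_bytes * 100 / .fs.total.total_in_bytes | floor)% used"' fs_stats.json
# -> es-data-0: 55% used
```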


[2] Removing the blocks:

PUT _all/_settings
{"index.blocks.read_only_allow_delete": null}

PUT _all/_settings
{"index.blocks.write": false}
