[FORBIDDEN/12/index read-only / allow delete (api)]: indexes are set to "read-only mode"


I have deployed the ELK stack (version 7.2.0) on Kubernetes, and everything was working just fine until I got the error "[FORBIDDEN/12/index read-only / allow delete (api)]" while doing some work in Kibana. What I understood is that when free disk space drops below the 5% limit, Elasticsearch turns all indexes into "read-only mode".
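For context, this block is driven by the disk-based shard allocation watermarks. As a sketch (assuming Elasticsearch is reachable on localhost:9200), the current watermark settings, including the defaults, can be inspected like this:

```shell
# Show the disk watermark settings (including defaults) that control
# when Elasticsearch applies the read-only / allow-delete block.
# Assumes Elasticsearch is reachable on localhost:9200.
curl -s "http://localhost:9200/_cluster/settings?include_defaults=true&filter_path=**.watermark.*"
```

In 7.x the defaults are 85% used (low), 90% (high) and 95% (flood stage); crossing the flood-stage watermark is what sets `index.blocks.read_only_allow_delete` on the indexes.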
I fixed the issue using this curl command:

```
curl -XPUT -H "Content-Type: application/json" \
  "http://localhost:9200/_all/_settings" \
  -d '{"index.blocks.read_only_allow_delete": null}'
```

However, at that time only 80 GB out of 150 GB were used (disk availability of at least 40%).
So I was wondering: how is that possible?
And is there a permanent solution to avoid this kind of issue?

This is very hard to debug remotely. In general, the actions you have taken (setting the read-only setting back to null) are good. I am not sure you want to keep the disk threshold decider disabled, though.

The main task would be to figure out why the disk threshold decider decided to set this read-only block. If this happens again, use the allocation explain API, and make sure you include disk info and yes decisions in the request.
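A concrete request for that would look roughly like this (the index name, shard number and primary flag are placeholders to adapt to your cluster):

```shell
# Ask Elasticsearch why a specific shard is allocated where it is,
# including disk information and the "yes" decisions.
# "my-index" is a placeholder; use one of your own indices.
curl -XGET -H "Content-Type: application/json" \
  "http://localhost:9200/_cluster/allocation/explain?include_disk_info=true&include_yes_decisions=true" \
  -d '{"index": "my-index", "shard": 0, "primary": true}'
```

The `include_disk_info=true` part is what adds the node disk usage that the decider saw, which is the interesting bit here.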

Did you have monitoring enabled, by any chance? How did you check for those 40%? It could be that some merge was being executed shortly before, so that there was a temporary peak in disk usage (note that removing that index setting still requires manual intervention).



Thank you for the quick reply, I really appreciate it @spinscale.

Well, at first I was only setting the read-only setting back to null; however, after a short time it was set to true again on its own. For that reason I had to disable the disk threshold decider.
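For reference, a sketch of how that can be done via the cluster settings API (and undone later by resetting the setting to null):

```shell
# Disable the disk threshold decider (transient: cleared on full cluster restart).
curl -XPUT -H "Content-Type: application/json" \
  "http://localhost:9200/_cluster/settings" \
  -d '{"transient": {"cluster.routing.allocation.disk.threshold_enabled": false}}'

# Re-enable it by resetting the setting to its default.
curl -XPUT -H "Content-Type: application/json" \
  "http://localhost:9200/_cluster/settings" \
  -d '{"transient": {"cluster.routing.allocation.disk.threshold_enabled": null}}'
```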
Furthermore, I am not sure how I can use the "allocation explain API".
Finally, I was relying on Stack Monitoring in Kibana to deduce those 40%; I also ran some GET requests in the console to verify disk availability.
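For completeness, one such GET that shows per-node disk usage as Elasticsearch itself sees it (assuming the same localhost:9200 endpoint as above):

```shell
# Per-node disk usage from Elasticsearch's point of view.
curl -s "http://localhost:9200/_cat/allocation?v&h=node,disk.used,disk.avail,disk.total,disk.percent"
```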

Looking forward to hearing back from you