Index setting for blocks.read_only_allow_delete keeps changing

Hi,

I'm trying to delete and re-ingest a large amount of data. I set up a script to do this, and things were going fine: it was deleting an old daily index and re-copying data to a file input folder, until everything stopped. I checked logstash-plain.log and saw a familiar message: "FORBIDDEN/12/index read-only / allow delete (api)".

But I thought I had already dealt with this. I've seen this message before, and I followed the steps in another thread on this forum (FORBIDDEN/12/index read-only / allow delete (api)) to set blocks.read_only_allow_delete: false on all of my indices.
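For reference, I cleared the block with something like this (assuming Elasticsearch on the default localhost:9200; setting the value to null instead of false would remove the override entirely):

```
# Disable the read-only/allow-delete block on every index
curl -XPUT "http://localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": false}'
```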

I also created a catch-all template with an order of 100000 and index_patterns: * to apply this setting to everything.
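Roughly like this, using the legacy _template API (the template name is just what I picked; index_patterns assumes Elasticsearch 6.x or later):

```
# Catch-all template so new indices get the setting too
curl -XPUT "http://localhost:9200/_template/all" \
  -H 'Content-Type: application/json' \
  -d '{
    "index_patterns": ["*"],
    "order": 100000,
    "settings": {
      "index.blocks.read_only_allow_delete": false
    }
  }'
```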

According to this blog https://benjaminknofe.com/blog/2017/12/23/forbidden-12-index-read-only-allow-delete-api-read-only-elasticsearch-indices/ this happens when disk space fills up, but my path.data disk is not full, and neither of my file input locations is full either.
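For what it's worth, here is how I checked disk usage as Elasticsearch itself sees it, via the standard cat allocation API:

```
# Per-node disk usage from Elasticsearch's point of view
curl "http://localhost:9200/_cat/allocation?v"
```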

Any ideas what's going on and how I can stop this?

Thanks!

It doesn't happen only when you run out of disk space; by default it happens when you have less than 5% of disk space available. That 5% corresponds to the cluster.routing.allocation.disk.watermark.flood_stage setting, which defaults to 95% disk usage. That's what I think is happening here.
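If you need some headroom while you clean up, you can inspect and temporarily raise the flood-stage watermark through the cluster settings API, something like the below (the 97% value is just an example; freeing disk space is the real fix):

```
# Show the current disk watermark settings, including defaults
curl "http://localhost:9200/_cluster/settings?include_defaults=true&filter_path=defaults.cluster.routing.allocation.disk"

# Temporarily raise the flood-stage watermark (transient = reset on full cluster restart)
curl -XPUT "http://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"cluster.routing.allocation.disk.watermark.flood_stage": "97%"}}'
```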
