Yet another "FORBIDDEN/12/index read-only" message

Yet another post about this error, I know. I have read through similar past threads, but none of them really helped.
This alert pops up in Kibana whenever I try to create a new index.
The hard drive has enough free space, so that is not the reason.

My first thought was that this is a filesystem permission issue, but if it is, I can't find where.

The full error:
Error 403 Forbidden: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];

Any advice appreciated.
Best,
JD

Can you have a look at the Elasticsearch cluster logs and check if there are any warnings there? In particular, can you grep for "flood stage disk watermark"?
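If it helps, something along these lines (a rough sketch; the log path assumes a default package install and a node listening on localhost:9200, so adjust both for your setup):

```sh
# Look for the flood-stage warning in the Elasticsearch logs
# (path assumes a default package install).
grep "flood stage disk watermark" /var/log/elasticsearch/*.log

# Show the disk watermark settings the cluster is actually using;
# the flood stage watermark defaults to 95% of the disk.
curl -s "localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty" \
  | grep "watermark"
```

Once a node crosses the flood stage watermark, Elasticsearch marks the indices on that node read-only, which is exactly the block shown in your error message.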

Hello, and thank you for the reply. After reading previous posts, the first thing I did was check disk usage, and it was far from 50% on all the mount points.

Also, I could not keep working like this, so I reverted the ELK VM to a previous snapshot and then re-upgraded.
It would still be great to know what causes this for other people, but in my case it is not disk space.
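For reference, in case it helps anyone else hitting this: disk usage can also be checked through Elasticsearch itself rather than only with df on the mount points. A minimal sketch, assuming the node listens on localhost:9200:

```sh
# Per-node disk usage as Elasticsearch reports it;
# disk.percent is the value the watermarks are compared against.
curl -s "localhost:9200/_cat/allocation?v&h=node,disk.used,disk.avail,disk.total,disk.percent"
```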

The question is what the disks looked like at the point when Elasticsearch decided to put this block in place. Crossing the flood stage disk watermark is the only reason this block gets applied, as far as I know.
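If the block is still set, it also has to be cleared once enough disk space has been freed; as far as I know only newer Elasticsearch versions release it automatically when usage drops back below the high watermark. A minimal sketch, assuming the node is reachable on localhost:9200:

```sh
# Remove the read-only / allow-delete block from all indices
# after freeing up disk space below the watermarks.
curl -s -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```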

This is a test machine; nothing runs on it apart from my Logstash code. Disk usage sits at around 50% at all times, with no spikes or changes, and no one else touches the VM. The error was also repeatable.

If disk space is the only reason for this error, then I would suggest this could be a bug. If someone at Elastic is interested in troubleshooting it, I can go back to the "broken" Logstash, which I keep as a VM snapshot.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.