I'm aware that such blocks are placed by Elasticsearch itself when it reaches certain conditions (like the disk watermark settings or JVM heap usage percentage).
I searched the docs but found nothing, so I'm asking here: is there a list of causes for cluster blocks? Where can we look to check why a block was applied?
We had very frequent Logstash stops because we were ingesting old data (messages, audit logs, ...). When we deploy a new VM from a template, Filebeat ships logs dating back to the template's creation (months ago), which causes them to be placed in very old indices that had been frozen by ILM.
Adding a Ruby filter to drop any log older than 30 days from indices containing server logs fixed the issue.
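For anyone hitting the same problem, a filter along these lines would do it. This is a minimal sketch, not the exact filter used above: the 30-day cutoff is from the post, but the use of `event.cancel` and the lack of a conditional restricting it to server-log events are assumptions.

```
filter {
  # Sketch: cancel (drop) any event whose @timestamp is more than
  # 30 days in the past, so Filebeat's replay of months-old template
  # logs never reaches frozen ILM indices. In a real pipeline you
  # would likely wrap this in an if-block so it only applies to the
  # server-log event types.
  ruby {
    code => "
      cutoff = Time.now.to_f - (30 * 24 * 60 * 60)
      event.cancel if event.get('@timestamp').to_f < cutoff
    "
  }
}
```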
The first thing to check is the specific blocked index you're writing to; then try to figure out the origin of the blocking log entry.
There are two cluster-wide blocks, and they are documented here. Neither of them is applied automatically. There are also a number of index blocks (search for "blocks" on that page), only one of which (read_only_allow_delete) is managed by Elasticsearch, based on disk usage. Elasticsearch does not add any blocks based on heap usage.
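For reference, the disk-based read_only_allow_delete block can be checked and, once disk space has been freed, cleared through the index settings API. The index name here is hypothetical, and note that recent Elasticsearch versions (7.4+) remove this block automatically once disk usage drops back below the watermark:

```
# Check whether the flood-stage block is set on an index
curl -s 'localhost:9200/my-index/_settings/index.blocks.read_only_allow_delete?pretty'

# After freeing disk space, clear the block manually if needed
curl -XPUT 'localhost:9200/my-index/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```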
Other blocks may be applied as a consequence of your own actions or configuration (e.g. freezing an index makes it read-only, as do some ILM actions).
Cool, that would answer the main question, thanks!
I keep wondering, though: where should I look to determine which kind of block is applied? I was thinking of the Elasticsearch log file, but it is flooded with other stuff, so I need some hint word or message to look out for, or another log file that records this kind of event.
Cluster-wide blocks are shown in the cluster settings, and index-level blocks are shown in the index settings. Also, if your logs are "flooded with other stuff" then that sounds like a problem you should address: in a healthy cluster the logs should be pretty quiet.
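Concretely, that means something like the following (the index name is hypothetical; these need to run against your cluster's HTTP endpoint):

```
# Cluster-wide blocks live in the cluster settings
curl -s 'localhost:9200/_cluster/settings?flat_settings&pretty'

# Index-level blocks appear under index.blocks.* in the index settings
curl -s 'localhost:9200/my-index/_settings?filter_path=*.settings.index.blocks*&pretty'
```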
Yeah, I know. The issue is that we use Elasticsearch for logging and we handle the logs of all the dev teams, so a lot of them just send anything, and we get many type-conflict errors and general misuse of index restrictions. That's why the logs are flooded with these "user-based" errors.
Anyway, your hints were helpful. I'll mark the question as solved, thanks a lot!