I have looked at all the previous posts on the
[index write API block](https://discuss.elastic.co/t/cluster-block-exception-forbidden-8-index-write-api/210594). I know this error occurs when events come in after indices have been closed. I have an ILM policy that freezes indices after 30 days. Unfortunately, some laptops shipped logs before they received the `ignore_older` Filebeat configuration setting, so the Filebeat pipeline on my Logstash nodes has filled up with failed retries of individual bulk actions. I have added Ruby code to my filter, before any other filters, to drop events older than 7 days, and have restarted the nodes several times. Each time a node restarts, its pipeline is fine until the second Logstash node is restarted; then the cluster_block_exceptions start reappearing on the first Logstash node.
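For reference, the age-drop filter I added looks roughly like this (a minimal sketch; my actual field handling may differ slightly):

```
filter {
  ruby {
    # Cancel (drop) any event whose @timestamp is more than 7 days old,
    # so stale events never reach the elasticsearch output.
    code => "event.cancel if (Time.now.to_f - event.get('@timestamp').to_f) > (7 * 86400)"
  }
}
```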
I freeze the indices after 30 days to limit the number of active shards in the cluster, but I don't want to lose the data in case it is needed later. Is there any way to clear these failed individual bulk actions from Logstash, or to identify which index is causing the cluster_block_exception? I am not using persistent queues and the
/var/lib/logstash/queue directory is empty, but I can't seem to clear the pipeline, and it prevents newer logs from processing in a timely manner.
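I assume I could at least list which indices currently have a block set with something like this in Kibana Dev Tools (a sketch; it shows blocked indices but doesn't tie a specific failed bulk action back to an index):

```
GET _all/_settings/index.blocks.*
```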
I have found the
[issue reported against Logstash](https://github.com/elastic/logstash/issues/10023), but it still appears to be open. I am going to open another issue against the elasticsearch output plugin to see if this can be resolved there.