Logstash Error: Retrying Failed Action With Response Code 403

I'm receiving the following error in Logstash:

[2019-04-01T14:43:45,454][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-04-01T14:43:45,454][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-04-01T14:43:45,455][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-04-01T14:43:45,455][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-04-01T14:43:45,455][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-04-01T14:43:45,455][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-04-01T14:43:45,455][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>69}

The closest matches I can find are this post and this post, which seem to indicate that it might be related to a disk space issue. However, I'm only at ~80% utilization on the partition where I'm storing my ES data, and I don't appear to be getting any errors in my ES logs.
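To double-check disk usage as Elasticsearch itself sees it (rather than just `df` on the host), something like this should work; `ES_HOST` here is just my local setup, adjust as needed:

```shell
# Show per-node disk usage from Elasticsearch's point of view
# (disk.percent is what the allocation watermarks are compared against).
ES_HOST="${ES_HOST:-http://localhost:9200}"
curl -s "${ES_HOST}/_cat/allocation?v" || true  # tolerate an unreachable cluster
```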

Additionally, I do not have SSL configured for ES (and I made certain that I didn't accidentally specify it in my beats.conf file). This config worked fine until sometime late on Mar. 28, and I haven't made any changes since then, so I really don't understand the 403/Forbidden response code.

The only thing I can think of is that somehow all of my indices got set to read-only, and I have no idea how that would happen. Is there an easy way to iterate through them and unset the read-only flag?
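In the meantime, here's a rough way to check which indices actually have a block set (same local cluster assumed; the settings-name filter on `_settings` may behave slightly differently across ES versions):

```shell
# List any index-level block settings; read_only_allow_delete shows up here
# for every index Elasticsearch has locked.
ES_HOST="${ES_HOST:-http://localhost:9200}"
curl -s "${ES_HOST}/_all/_settings/index.blocks.*?pretty" || true  # tolerate an unreachable cluster
```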

Proceeding on the theory that all indices somehow got set to read_only_allow_delete, I executed the following command to iterate through all indices and reset that flag:

Edit: Buggy script removed

It did not resolve the issue.

Update

OK...a little embarrassing here. As I looked back through what I posted yesterday, I noticed that I had a typo in the curl statement: the JSON body and the URL were swapped. It read: curl -s -X PUT -H "<header>" "<json>" -d "<url>".

I updated it as follows:

#!/bin/bash
for i in $(curl -s -X GET http://localhost:9200/_cat/indices | awk -F ' ' '{print $3}' | sort)
do
    echo Updating ${i}: $(curl -s -X PUT -H "Content-Type: application/json" \
                               -d '{"index.blocks.read_only_allow_delete": null}' \
                               "http://localhost:9200/${i}/_settings")
done

...which unlocked my indices as expected and allowed log entries to start flowing again. I'm still uncertain why this occurred, however, as I'm unable to see any errors on the ES side of things.
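For what it's worth, the usual cause of this block is Elasticsearch's flood-stage disk watermark (95% by default): once any node crosses it, ES sets `index.blocks.read_only_allow_delete` on indices with shards on that node, and older versions do not remove the block automatically even after space is freed (newer releases do). The per-index loop above can also be collapsed into a single call; a sketch, assuming the same local cluster:

```shell
# Clear the read_only_allow_delete block on all indices in one request
# instead of looping over _cat/indices.
ES_HOST="${ES_HOST:-http://localhost:9200}"
curl -s -X PUT -H "Content-Type: application/json" \
     -d '{"index.blocks.read_only_allow_delete": null}' \
     "${ES_HOST}/_all/_settings" || true  # tolerate an unreachable cluster
```

Setting the value to `null` removes the override entirely rather than pinning it to `false`, so ES can manage it again on its own.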
