Retrying individual bulk actions that failed or were rejected by the previous bulk request

Hi all,

The Logstash logs show this error:

retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];

What should I do to fix this issue?


Could anyone help me?

I've never had this issue myself, but the explanation you get for the failed bulk operation seems pretty clear: the index you are trying to write to is read-only. The big question is why. I can only think of two possibilities:

  1. The disk where Elasticsearch stores its index data has become read-only, or

  2. The index has been closed.

You should take a look at the disks in your cluster to rule out #1, and at the Open / Close Index API for #2.
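To rule out #2, you can also inspect the index settings directly and look for any `index.blocks.*` entries. A minimal check, assuming Elasticsearch is reachable on `localhost:9200` (adjust the host to your setup):

```shell
# List any block settings across all indices; an empty result means no blocks are set
curl 'http://localhost:9200/_all/_settings?filter_path=*.settings.index.blocks&pretty'
```

If you see `"read_only_allow_delete": "true"` here, the index has been blocked for writes.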

Hi @Bernt_Rostad ,
Thanks for your reply. After checking the Logstash and Elasticsearch logs, here is what I found. The Logstash log shows:

[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,716][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,716][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,716][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>125}

and the Elasticsearch log shows:

[2018-07-04T10:13:45,050][INFO ][o.e.c.r.a.AllocationService] [JkvjWj8] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[metricbeat-6.3.0-2018.06.28][0]] ...]).
[2018-07-04T10:14:04,213][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.2%], shards will be relocated away from this node
[2018-07-04T10:14:04,214][INFO ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2018-07-04T10:14:34,246][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.2%], shards will be relocated away from this node
[2018-07-04T10:15:04,298][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.2%], shards will be relocated away from this node
[2018-07-04T10:15:04,298][INFO ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2018-07-04T10:15:34,327][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.1%], shards will be relocated away from this node
[2018-07-04T10:16:04,355][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.1%], shards will be relocated away from this node
[2018-07-04T10:16:04,355][INFO ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2018-07-04T10:16:34,383][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.1%], shards will be relocated away from this node

Is this caused by the hard disk?

As you can see in the logs, you are running out of disk space and have exceeded the high watermark, which is why you are seeing these errors.

How large is your cluster? How much disk does each data node have? Which version are you running?

Yes, that could be the cause. The warning tells you what's happening:

One or more nodes in your cluster have passed the high disk watermark, which means more than 90% of the disk is full. When that happens, Elasticsearch will try to move shards away from the node to free up space, but only if it can find another node with enough space.
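You can check per-node disk usage with the cat allocation API. A quick sketch, assuming Elasticsearch on `localhost:9200`:

```shell
# Shows shards per node plus disk.used, disk.avail, disk.total and disk.percent
curl 'http://localhost:9200/_cat/allocation?v'
```

The `disk.percent` column tells you how close each node is to the 90% high watermark.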

You need to add more disk space, either on each node or by adding more nodes to the cluster to let Elasticsearch spread the load.

@Christian_Dahlqvist @Bernt_Rostad
After expanding my instance's hard disk I now have more than 150G of free disk space, but Discover still returns the same error.

So what should I do after increasing the disk space?

Did you reset the read-only index block as described in the docs I linked to?
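For reference, clearing the block is a single settings update. A sketch, assuming Elasticsearch on `localhost:9200` (using `_all` here so you don't need to know the individual index names; you can substitute a specific index or pattern):

```shell
# Remove the read_only_allow_delete block from all indices
curl -XPUT -H 'Content-Type: application/json' \
  'http://localhost:9200/_all/_settings' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```

Setting the value to `null` removes the block so indexing can resume. Note that on 6.x the block is not removed automatically, so even after freeing disk space you have to reset it yourself.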


@Christian_Dahlqvist OK, but how do I find my current index name to put in that request?


@Christian_Dahlqvist What is your opinion on using this fix?
https://benjaminknofe.com/blog/2017/12/23/forbidden-12-index-read-only-allow-delete-api-read-only-elasticsearch-indices/


That is what the docs I linked to tell you to do.
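As for finding your index names, you can list them all with the cat indices API. A minimal sketch, assuming Elasticsearch on `localhost:9200`:

```shell
# Lists every index with its health, doc count and store size
curl 'http://localhost:9200/_cat/indices?v'
```

That said, applying the settings update to `_all` clears the block everywhere without needing the individual names.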


@Christian_Dahlqvist Thank you very much, it works fine!

@Christian_Dahlqvist I have another question, could you help me with it?

I have multiple remote Filebeat instances, all shipping their logs to a single ELK server, but Logstash only receives logs from one server at a time, and after every restart of the Logstash service it receives logs from a different one of the remote Filebeats.
What should I do to make Logstash receive logs from all the Filebeats at the same time?

It should be able to get data from multiple sources, so I would recommend opening a new topic in the Logstash category and sharing all your configs.
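For reference, a single Beats input can accept connections from many Filebeat instances at once; there is no need for one input per source. A minimal config sketch (port 5044 and the `localhost:9200` output are assumptions; adjust to your setup):

```
input {
  beats {
    # One listener handles every Filebeat that points at this host:port
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

If only one Filebeat gets through at a time, the problem is usually in the Filebeat output configs or networking rather than in Logstash itself, which is why sharing all your configs in a new topic is the best next step.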


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.