Retrying individual bulk actions that failed or were rejected by the previous bulk request

Hi @Bernt_Rostad,
Thanks for your reply. After checking the logs for both Logstash and Elasticsearch, here is what I found. The Logstash log shows:

[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,715][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,716][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,716][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/1
[2018-07-04T10:16:33,716][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>125}
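
For reference, my understanding is that a 403 cluster_block_exception on bulk requests usually means the target indices carry a write block. Assuming it is the disk-based read-only block (the message above is cut off after FORBIDDEN/1, so I'm not completely sure) and that Elasticsearch is listening on localhost:9200, I can check the block settings on all indices with something like:

curl -X GET "localhost:9200/_all/_settings/index.blocks.*?pretty"

I would expect any blocked index to show index.blocks.read_only_allow_delete: true in the output.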

and the Elasticsearch log shows:

[2018-07-04T10:13:45,050][INFO ][o.e.c.r.a.AllocationService] [JkvjWj8] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[metricbeat-6.3.0-2018.06.28][0]] ...]).
[2018-07-04T10:14:04,213][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.2%], shards will be relocated away from this node
[2018-07-04T10:14:04,214][INFO ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2018-07-04T10:14:34,246][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.2%], shards will be relocated away from this node
[2018-07-04T10:15:04,298][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.2%], shards will be relocated away from this node
[2018-07-04T10:15:04,298][INFO ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2018-07-04T10:15:34,327][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.1%], shards will be relocated away from this node
[2018-07-04T10:16:04,355][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.1%], shards will be relocated away from this node
[2018-07-04T10:16:04,355][INFO ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2018-07-04T10:16:34,383][WARN ][o.e.c.r.a.DiskThresholdMonitor] [JkvjWj8] high disk watermark [90%] exceeded on [JkvjWj8PQmGioaUsLTrYIw][JkvjWj8][/var/lib/elasticsearch/nodes/0] free: 4.4gb[9.1%], shards will be relocated away from this node
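
The DiskThresholdMonitor warnings show the data path with only 4.4gb (about 9%) free, which is past the 90% high watermark. To confirm disk usage per node from Elasticsearch itself (again assuming localhost:9200), something like this should work:

curl -X GET "localhost:9200/_cat/allocation?v"

which lists disk.used, disk.avail and disk.percent for each node.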

Is this caused by the disk being full?
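
If it is indeed the disk-based block, would the right fix be to free up space (or raise the watermarks) and then clear the block manually? As far as I understand, in 6.x the block is not removed automatically once disk space is available, so something like the following would be needed (just a sketch, assuming the indices were marked read_only_allow_delete and Elasticsearch is on localhost:9200):

curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{ "index.blocks.read_only_allow_delete": null }
'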