unavailable_shards_exception: filebeat primary shard is not active


(Swaroop Chandre) #1

Hello, can anyone help me with this? I'm getting the following error in the Logstash logs:

[2017-09-05T23:47:53,660][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[filebeat-2017.09.05][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[filebeat-2017.09.05][2]] containing [27] requests]"})
[2017-09-05T23:48:55,665][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[filebeat-2017.09.05][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[filebeat-2017.09.05][2]] containing [6] requests]"})
[2017-09-05T23:48:55,666][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[filebeat-2017.09.05][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[filebeat-2017.09.05][2]] containing [6] requests]"})

Current index state

GET _cat/indices

red    open filebeat-2017.09.06 DTJrafm-QLWl_5eaeGSiWg 5 1                      
red    open filebeat-2017.09.05 Jco7ejvORgakQrAmoHI8Gg 5 1 2574255 0   2gb   2gb
yellow open .kibana             NHtLrUyfSEOvsdRRyutbyg 1 1       2 0 8.1kb 8.1kb

I had earlier deleted all the indices. Could that be the cause of this error? How do I resolve it now?


(Mark Walkom) #2

What do your Elasticsearch logs show?


(Swaroop Chandre) #3

Thank you for the response. The Elasticsearch logs show the following:

[2017-09-06T00:44:41,901][WARN ][o.e.c.r.a.DiskThresholdMonitor] [Z2K_pox] high disk watermark [90%] exceeded on [Z2K_poxOQuigXVHSSazkKQ][Z2K_pox][/var/lib/elasticsearch/nodes/0] free: 36kb[6.3E-4%], shards will be relocated away from this node
[2017-09-06T00:44:41,901][INFO ][o.e.c.r.a.DiskThresholdMonitor] [Z2K_pox] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-09-06T00:45:11,913][WARN ][o.e.c.r.a.DiskThresholdMonitor] [Z2K_pox] high disk watermark [90%] exceeded on [Z2K_poxOQuigXVHSSazkKQ][Z2K_pox][/var/lib/elasticsearch/nodes/0] free: 12kb[2.1E-4%], shards will be relocated away from this node
[2017-09-06T00:45:41,925][WARN ][o.e.c.r.a.DiskThresholdMonitor] [Z2K_pox] high disk watermark [90%] exceeded on [Z2K_poxOQuigXVHSSazkKQ][Z2K_pox][/var/lib/elasticsearch/nodes/0] free: 16kb[2.8E-4%], shards will be relocated away from this node
[2017-09-06T00:45:41,925][INFO ][o.e.c.r.a.DiskThresholdMonitor] [Z2K_pox] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-09-06T00:46:11,937][WARN ][o.e.c.r.a.DiskThresholdMonitor] [Z2K_pox] high disk watermark [90%] exceeded on [Z2K_poxOQuigXVHSSazkKQ][Z2K_pox][/var/lib/elasticsearch/nodes/0] free: 4kb[7E-5%], shards will be relocated away from this node
[2017-09-06T00:46:41,955][WARN ][o.e.c.r.a.DiskThresholdMonitor] [Z2K_pox] high disk watermark [90%] exceeded on [Z2K_poxOQuigXVHSSazkKQ][Z2K_pox][/var/lib/elasticsearch/nodes/0] free: 12kb[2.1E-4%], shards will be relocated away from this node
[2017-09-06T00:46:41,955][INFO ][o.e.c.r.a.DiskThresholdMonitor] [Z2K_pox] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-09-06T00:47:11,967][WARN ][o.e.c.r.a.DiskThresholdMonitor] [Z2K_pox] high disk watermark [90%] exceeded on [Z2K_poxOQuigXVHSSazkKQ][Z2K_pox][/var/lib/elasticsearch/nodes/0] free: 8kb[1.4E-4%], shards will be relocated away from this node

How do I resolve this?

Regards,
Swaroop


(Mark Walkom) #4

Add more disk space.
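To expand on that: the logs show only a few kilobytes free on the data path, so the node has crossed the high disk watermark (90% by default) and Elasticsearch will not keep primary shards active there, which is why the filebeat indices stay red. After freeing or adding disk space, recovery can be checked with the cat APIs; as a temporary stopgap while cleaning up, the watermarks can be raised via a transient cluster setting. A minimal sketch (the watermark values here are illustrative, not a recommendation):

```
GET _cat/allocation?v

GET _cat/indices?v

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "95%",
    "cluster.routing.allocation.disk.watermark.high": "97%"
  }
}
```

A transient setting reverts when the cluster restarts, so it won't silently mask a full disk later; the real fix is still more disk space. On the host itself, `df -h /var/lib/elasticsearch` confirms how much space is actually free on the data volume.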


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.