Index in red, UNASSIGNED PRIMARY_FAILED

I am using Elasticsearch 5.4.0. One of my indices is in a red state and I have no idea how to fix it.

_cat/shards/logstash-ngccapplogs-2017.06.08-1?v&h=index,shard,prirep,state,unassigned.reason

index                             shard prirep state      unassigned.reason
logstash-ngccapplogs-2017.06.08-1 8     p      STARTED    
logstash-ngccapplogs-2017.06.08-1 8     r      STARTED    
logstash-ngccapplogs-2017.06.08-1 6     p      STARTED    
logstash-ngccapplogs-2017.06.08-1 6     r      STARTED    
logstash-ngccapplogs-2017.06.08-1 1     p      STARTED    
logstash-ngccapplogs-2017.06.08-1 1     r      STARTED    
logstash-ngccapplogs-2017.06.08-1 9     p      STARTED    
logstash-ngccapplogs-2017.06.08-1 9     r      STARTED    
logstash-ngccapplogs-2017.06.08-1 10    r      STARTED    
logstash-ngccapplogs-2017.06.08-1 10    p      STARTED    
logstash-ngccapplogs-2017.06.08-1 7     p      STARTED    
logstash-ngccapplogs-2017.06.08-1 7     r      STARTED    
logstash-ngccapplogs-2017.06.08-1 11    p      STARTED    
logstash-ngccapplogs-2017.06.08-1 11    r      STARTED    
logstash-ngccapplogs-2017.06.08-1 3     p      UNASSIGNED ALLOCATION_FAILED
logstash-ngccapplogs-2017.06.08-1 3     r      UNASSIGNED PRIMARY_FAILED
logstash-ngccapplogs-2017.06.08-1 4     p      STARTED    
logstash-ngccapplogs-2017.06.08-1 4     r      STARTED    
logstash-ngccapplogs-2017.06.08-1 2     p      STARTED    
logstash-ngccapplogs-2017.06.08-1 2     r      STARTED    
logstash-ngccapplogs-2017.06.08-1 5     p      STARTED    
logstash-ngccapplogs-2017.06.08-1 5     r      STARTED    
logstash-ngccapplogs-2017.06.08-1 0     p      STARTED    
logstash-ngccapplogs-2017.06.08-1 0     r      STARTED

Check your logs, they should mention something.
Otherwise use the allocation API to see why it's not allocating.

Also, unless you have massive indices (600 GB+), having 12 shards is a little excessive.
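
If you do want fewer shards for future daily indices, an index template is one way to do it. A minimal 5.x sketch only; the template name and the shard/replica counts here are illustrative, not a recommendation for your data volume:

# Sketch: template name and counts are illustrative.
# Only affects indices created after the template is installed.
PUT _template/logstash-ngccapplogs
{
  "template": "logstash-ngccapplogs-*",
  "settings": {
    "index.number_of_shards": 6,
    "index.number_of_replicas": 1
  }
}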

Thanks for the reply Mark. My daily index is 1 to 1.4 TB in size and I have 25 data nodes with 2 TB each, so I used 12 shards per index. I used the cluster allocation explain API and found that:

GET /_cluster/allocation/explain
{
  "index": "logstash-ngccapplogs-2017.06.08-1",
  "shard": 3,
  "primary": true
}

"index": "logstash-ngccapplogs-2017.06.08-1",
  "shard": 3,
  "primary": true,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "ALLOCATION_FAILED",
    "at": "2017-06-10T19:13:32.164Z",
    "failed_allocation_attempts": 5,
    "details": "failed to create shard, failure IOException[No space left on device]",
    "last_allocation_status": "no"
  }

At that time the disk got full and the shard couldn't be created. I suppose I can't recover from this.
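
For the record, "failed_allocation_attempts": 5 just means the cluster hit index.allocation.max_retries and gave up retrying on its own; it doesn't necessarily mean the shard is gone for good. Once disk space has been freed, you can ask for a manual retry. A minimal sketch (the retry_failed flag exists in 5.x; this assumes the disk-full condition is actually resolved):

# Check whether the data nodes have free disk again
GET _cat/allocation?v

# Ask the cluster to retry allocations that exhausted their retry budget
POST /_cluster/reroute?retry_failed=true

If the retry still fails, the remaining options in 5.x are restoring the index from a snapshot, or a reroute with allocate_empty_primary and accept_data_loss set to true, which brings the shard back empty and accepts the loss of whatever would have been in it.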

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.