Fix Unallocated Shards

Good day,

I was hoping you could help us with an issue we are consistently encountering in Elasticsearch. We are running a single-node cluster in our Production environment, and indices are failing due to unallocated shards.
Our temporary workaround has been to close each failed index, but this is not sustainable, as some of these indices are only three weeks old.
Appreciate your help. Thanks in advance.

We have tried the following to resolve it:
1. Set replicas to 0 (see the sketch after this list)
2. Enable shard allocation (see the sketch after this list)
3. Reroute unallocated shards:

curl -XPOST 'localhost:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
  "commands" : [ {
    "allocate" : {
      "index" : "test",
      "shard" : 0,
      "node" : "aIiS2_OQRdid6MjBvuXGMg",
      "allow_primary" : "true"
    }
  } ]
}'

4. Force reallocation:

curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -H 'Content-Type: application/json' -d '{
  "commands" : [ {
    "allocate_stale_primary" : {
      "index" : ".test",
      "shard" : 0,
      "node" : "aIiS2_O",
      "accept_data_loss" : true
    }
  } ]
}'
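
For reference, steps 1 and 2 were done roughly as follows (a minimal sketch; the index name "test" is the same one used in the reroute command above):

# Step 1: drop replicas to 0 (a single-node cluster can never allocate replica shards)
curl -XPUT 'localhost:9200/test/_settings' -H 'Content-Type: application/json' -d '{
  "index" : { "number_of_replicas" : 0 }
}'

# Step 2: re-enable shard allocation cluster-wide in case it was disabled
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "transient" : { "cluster.routing.allocation.enable" : "all" }
}'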

Elasticsearch Health: [screenshot attached]

Allocation log: [screenshot attached]

From the screen dump you provided, it seems the problem is that you have exceeded the operating system's max open files limit for the Elasticsearch process. Since every shard needs open file handles for its segment files, that shard cannot be assigned until Elasticsearch can open more file handles.
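
You can confirm this by comparing the node's open and maximum file descriptor counts, for example with the nodes stats API (the filter_path parameter just trims the response to the relevant fields):

curl -XGET 'localhost:9200/_nodes/stats/process?filter_path=**.open_file_descriptors,**.max_file_descriptors&pretty'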

You will either have to increase the max open files setting (see the "Configuring system settings" page in the Elasticsearch documentation), or reduce the number of open shards in your cluster by using the Shrink or Reindex APIs to create indices with fewer shards.
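
If you take the shrink route, the sequence looks roughly like this (a sketch only; "my-index", "my-index-shrunk", and the node name "my-node" are placeholder values, and the source index's shard count must be a multiple of the target shard count):

# 1. Move all shards of the source index to one node and block writes (both are required before shrinking)
curl -XPUT 'localhost:9200/my-index/_settings' -H 'Content-Type: application/json' -d '{
  "index.routing.allocation.require._name" : "my-node",
  "index.blocks.write" : true
}'

# 2. Shrink into a new index with a single primary shard
curl -XPOST 'localhost:9200/my-index/_shrink/my-index-shrunk' -H 'Content-Type: application/json' -d '{
  "settings" : {
    "index.number_of_shards" : 1,
    "index.number_of_replicas" : 0
  }
}'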
