Elasticsearch doesn't allow allocating unassigned shards

Hi All,

Recently, my Elasticsearch cluster status went 'red', and as a result I am having problems with Elasticsearch and my application. I got the following response when performing a health check:

  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : true,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 3,
  "active_shards" : 3,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 12,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 20.0
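(A health report like the one above comes from the cluster health API; this is the console-style request, assuming a default setup:)

GET /_cluster/health?pretty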

Now it shows 12 unassigned shards. Following some suggestions, I tried to resolve this but was unable to do so. When I run the allocation explanation, I get the following message, but I'm not sure what to do:

  "index" : "291397e0dbf7e4bdad5ec5d650727621_shared",
  "shard" : 1,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2020-04-23T12:35:51.447Z",
    "last_allocation_status" : "no_attempt"
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
      "node_id" : "rpQNL_DFRrK5DS7R7KRCaA",
      "node_name" : "rpQNL_D",
      "transport_address" : "",
      "node_decision" : "no",
      "deciders" : [
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[291397e0dbf7e4bdad5ec5d650727621_shared][1], node[rpQNL_DFRrK5DS7R7KRCaA], [P], s[STARTED], a[id=U8WTgIQsSf2NKe-lWLk4Ag]]"

Any ideas or help in this regard?

What is the output of:

GET /_cat/nodes?v
GET /_cat/health?v
GET /_cat/indices?v
GET /_cat/shards?v

If some outputs are too big, please share them on gist.github.com and link them here.

Hi @dadoonet, thanks for the reply. You can find the output results at the following URL.

Kind Regards,

It sounds like some primary shards are no longer available on your disk.
It also sounds like you have only one node, so no replicas are available either.

Which means that you need to reindex your data.

The following command will delete the entire index:

DELETE 291397e0dbf7e4bdad5ec5d650727621_shared

It will make your cluster green again, but without the data.

Any idea why you went into that situation? Did you check the previous logs?

Also 5.6.0 is a very old version.
While you are at it, start a 7.6 cluster instead.

Aaah, it's very unfortunate that I have to delete the entire index. Is there no other way, without losing data?
I have no idea why this situation happened in the first place, as I can't find anything, or maybe I am not looking in the right place/logs. Are the logs stored anywhere other than '/var/log/elasticsearch/'?

Unfortunately I cannot start a 7.6 cluster, as I am using Elasticsearch with SugarCRM, which supports 5.6 only.

Thanks for the help.

Kind Regards,

Unless we know exactly what happened, I don't think so.
But the main question is: if you do care about your data, why do you have only one node in your cluster?

Could you share the full logs you have?

I guess that SugarCRM might support 5.6.16. I'd at least upgrade to this version.

I am not sure why there is only one node in the cluster, as I just jumped in to look into this. I will definitely discuss this point with the people who set up the server.
Here is the gist of this month's Elasticsearch logs.

Kind Regards,

So the cluster was stopped at 2020-04-06T07:08:59,099 and restarted a minute later at 2020-04-06T07:10:22,592.

Caused by: org.apache.lucene.index.CorruptIndexException: codec footer mismatch (file truncated?): actual footer=1466921579 vs expected footer=-1071082520

This is important I think.

I'd read this: https://www.elastic.co/blog/red-elasticsearch-cluster-panic-no-longer

There is this tool but I'm not sure it exists in 5.x: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/shard-tool.html
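(For reference, in 6.5+ that tool is invoked roughly like this, using the index and shard from the explain output above; note that it truncates the corrupted data, so it's a last resort, and as noted it is not available in 5.x:)

bin/elasticsearch-shard remove-corrupted-data --index 291397e0dbf7e4bdad5ec5d650727621_shared --shard-id 1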

Maybe @jpountz has another idea?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.