Allocate shards stuck in UNASSIGNED state on 5.6.2

I am running a system with Elasticsearch 5.6.2. I did a rolling upgrade to Elasticsearch 6.1.3, but something went wrong with the existing applications, so I reverted back to v5.6.2. The issue I am facing is that the cluster has formed but its status is RED, and when I check the status of any shard from any index, this is the JSON I get:

{
   "state": "UNASSIGNED",
   "primary": true,
   "node": null,
   "relocating_node": null,
   "shard": 1,
   "index": "pu_reviews_prod_db",
   "recovery_source": {
         "type": "EXISTING_STORE"
    },
   "unassigned_info": {
        "reason": "CLUSTER_RECOVERED",
        "at": "2018-02-03T19:44:53.350Z",
        "delayed": false,
        "allocation_status": "no_valid_shard_copy"
   }
}

How do I bring back the data? I have 10 indices, and every shard in each of them returns the same response (with its own index name).
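For the record, the full list of unassigned shards and their reasons can be pulled with the cat shards API (column names per the 5.x docs):

```
GET /_cat/shards?h=index,shard,prirep,state,unassigned.reason
```

Every row comes back as UNASSIGNED with reason CLUSTER_RECOVERED.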

I tried to manually allocate a shard with the query below:

POST http://serverIP/_cluster/reroute

{
   "commands": [
    {
       "allocate": {
        "index": "pu_reviews_prod_db",
        "shard": "0",
        "node": "ates-data-01",
        "allow_primary": 1
      }
    }
  ]
}

but it gives me an error response: Unknown AllocationCommand [allocate]
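(For reference: the generic `allocate` reroute command was removed in Elasticsearch 5.0 and split into `allocate_replica`, `allocate_stale_primary`, and `allocate_empty_primary`. Forcing a primary onto a node requires one of the latter two, together with `accept_data_loss: true`. A sketch, using the index, shard, and node names from this post:)

```
POST /_cluster/reroute
{
  "commands": [
    {
      "allocate_stale_primary": {
        "index": "pu_reviews_prod_db",
        "shard": 0,
        "node": "ates-data-01",
        "accept_data_loss": true
      }
    }
  ]
}
```

`allocate_stale_primary` only works if a (possibly stale) on-disk copy exists on that node; `allocate_empty_primary` instead creates a brand-new empty shard, losing all data previously in it. Both are last-resort operations.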

How can I bring it back up?

You can't revert once you've upgraded. You'd probably need to restore your last backup.

@dadoonet do you mean restoring snapshots taken on the old system?

@dadoonet I placed my old data node back into the cluster and pulled out the new data node, but the shards still remain unassigned. Could this be an issue with the segments.gen file? If so, where can I find it?

It's strange, since _cluster/allocation/explain returns:

{
    "index": "pu_reviews_prod_db",
    "shard": 2,
    "primary": true,
    "current_state": "unassigned",
    "unassigned_info": {
        "reason": "CLUSTER_RECOVERED",
        "at": "2018-02-03T21:37:38.066Z",
        "last_allocation_status": "no_valid_shard_copy"
    },
    "can_allocate": "no_valid_shard_copy",
    "allocate_explanation": "cannot allocate because all found copies of the shard are either stale or corrupt",
    "node_allocation_decisions": [
        {
            "node_id": "WtN6PQV_SI6CQPe2xOdk6Q",
            "node_name": "ates-data-01",
            "transport_address": "99.0.xx.yy:9300",
            "node_decision": "no",
            "store": {
                "in_sync": false,
                "allocation_id": "MglwwWL0QHeRCuMFZVCDKQ"
            }
        }
    ]
} 

and the old data node is exactly the way it was, and the master nodes and client nodes are both v5.6.2, as before.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.