All Shards failed to assign

Hi,
I tried restoring the EBS volumes of Elasticsearch nodes from one cluster to another. After the restore, the nodes are running in EC2, but the cluster health shows red.

GET http://hostname/_cluster/health?pretty
{
  "cluster_name" : "clusterName1",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 8,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 270,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 0.0
}

Please help me with this. I even tried restarting all the nodes and the client instance, but the result is the same.

Thanks,
Pradeep

Which version are you using?

Hi @Christian_Dahlqvist
We are using an old ES version, "5.6.5".

Regards,
Pradeep

Have you checked the health of the red shards? You can do so by hitting:
http://<yourhost>:xxxx/_cluster/health?level=shards
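Once you have that shard-level response, a quick way to digest it is to filter for the red indices. A minimal sketch (the sample response below is illustrative, shaped like a `_cluster/health?level=shards` reply, not data from this cluster):

```python
import json

def red_indices(health):
    """Return {index_name: unassigned_shards} for every red index in a
    _cluster/health?level=shards (or level=indices) response."""
    return {
        name: info["unassigned_shards"]
        for name, info in health.get("indices", {}).items()
        if info["status"] == "red"
    }

# Illustrative sample response (not real data from this cluster).
sample = json.loads("""
{
  "status": "red",
  "unassigned_shards": 20,
  "indices": {
    "logstash-2019.02.20": {"status": "red", "unassigned_shards": 10},
    "some-healthy-index":  {"status": "green", "unassigned_shards": 0}
  }
}
""")

print(red_indices(sample))  # {'logstash-2019.02.20': 10}
```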

@lhmzhou
Yes, I did that. This is the response I am getting:
{
  "cluster_name" : "cluster name",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 8,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 270,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 0.0,
  "indices" : {
    "avoxclient-v003-2019.02.27" : {
      "status" : "red",
      "number_of_shards" : 5,
      "number_of_replicas" : 1,
      "active_primary_shards" : 0,
      "active_shards" : 0,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 10,
      "shards" : {
        "0" : { "status" : "red", "primary_active" : false, "active_shards" : 0, "relocating_shards" : 0, "initializing_shards" : 0, "unassigned_shards" : 2 },
        [shards "1" through "4" identical to shard "0"]
      }
    },
    "logstash-2019.02.21" : { [same values as above, 10 unassigned shards] },
    "logstash-2019.02.20" : { [same values as above, 10 unassigned shards] },
    ".elastichq" : { [same values as above; response truncated here] }

Restoring from a file system snapshot is not supported. You are also using quite an old version, so I am not sure I can help.

Hi,

If restoring from the file system is not supported, what are the options to restore? I tried this because a nested query was failing in one environment but working fine in another. I thought restoring the EBS volumes from the working environment would fix the problem.

What exactly is the history here? You copied over EBS volumes for some of the nodes from another cluster, but not all of them? I would think a complete restore of all disks of all VMs would work (even if not supported, I'd like to hear why it wouldn't, as we often snapshot our EBS disks before an upgrade, etc.)

I think I have seen it work for older versions, but I do not know the steps or conditions required for it to work. Restoring as a backup for the same node, without changing the IP/host name, is different from setting up a new cluster elsewhere based on the snapshots. For version 7.x I believe additional checks have been put in place that prevent it from working. You should instead use the snapshot and restore APIs.

Hi @Steve_Mushero
Yes, I copied EBS volumes for some of the nodes from another cluster, not all of them. Do you think it would work if I copied all?
I have one more question: where is this schema stored in the cluster? Is it on the data nodes or on the client/master nodes?

Thanks,
Pradeep

As Christian mentioned, this is quite hard to make work - in theory, if you copy all the VMs and their disks, and if the IPs are the same, it might work; it's certainly not possible if you don't copy all the nodes - but why not just snapshot the cluster and restore it? That's the approved method :wink:

Hi @Steve_Mushero How do I restore a snapshot? I am not sure about this; can you please guide me?

Docs like these:

https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshots-restore-snapshot.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html

Thanks for the link. I understand we have to use the Create Snapshot API
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
to create a snapshot, but on which node do I have to execute this API: master, client, or a specific node? My Kibana is not running, so I have to try a curl command, but I am not sure where to run it.

All curl commands can go anywhere; as far as I know, they are all cluster-level commands (even though some are directed at a node or index), not commands you must send to the master, so just point to any node and run them. Note that wait_for_completion will often time out in Kibana, but maybe not in curl; you can also set it to false and just check the status.
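To make the sequence concrete, here is a hedged sketch of the requests involved; the repository name my_backup is from the earlier PUT, while the location /mnt/es_backups is an assumption (in 5.6 that directory must be listed under path.repo in elasticsearch.yml on every node). The helper only builds the calls; sending them with curl or urllib against any node is left to you:

```python
def snapshot_plan(repo, snap, location):
    """Build the (method, path, body) sequence for registering a shared
    filesystem repository and taking a snapshot without blocking."""
    return [
        # 1. Register the repository (location is an assumption here).
        ("PUT", f"/_snapshot/{repo}",
         {"type": "fs", "settings": {"location": location}}),
        # 2. Create the snapshot; wait_for_completion=false returns at once.
        ("PUT", f"/_snapshot/{repo}/{snap}?wait_for_completion=false", None),
        # 3. Poll this until the reported "state" is "SUCCESS".
        ("GET", f"/_snapshot/{repo}/{snap}", None),
    ]

for method, path, body in snapshot_plan("my_backup", "snapshot_1",
                                        "/mnt/es_backups"):
    print(method, path, body or "")
```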

Thanks @Steve_Mushero
Are there any steps or a protocol we have to follow to restart an ES cluster? Is there an order we have to maintain, like master nodes, client nodes, and data nodes?

Well, what do you mean by 'restart' a cluster? Do you mean a full restart from scratch, or a rolling restart such as for upgrades, where you restart one node at a time?

Or do you mean starting a new cluster to restore into from a snapshot? For that I'm not sure; it would be nice if there was a doc on it, but I'd think you'd have to stop all nodes, delete/purge the ES data directories, and start the nodes, which would come up as a fresh new cluster with no data. Then you'd load the snapshot, with or without cluster state. But I've never done it.
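Once the fresh cluster is up and the same repository has been registered on it, the restore itself is one call. A sketch under the same assumptions (repository my_backup and snapshot snapshot_1 are hypothetical names):

```python
def restore_request(repo, snap, include_global_state=False):
    """Build the POST that restores every index from a snapshot.
    include_global_state=True would also restore cluster state
    (templates, persistent settings) along with the indices."""
    return ("POST",
            f"/_snapshot/{repo}/{snap}/_restore",
            {"indices": "*", "include_global_state": include_global_state})

method, path, body = restore_request("my_backup", "snapshot_1")
print(method, path)  # POST /_snapshot/my_backup/snapshot_1/_restore
```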

There are some examples around, but they are mostly about how to dump the snapshot.

Thanks @Steve_Mushero
I have an issue with my index mapping, and I found the right mapping to fix it. But when I tried to update the index mapping in Elasticsearch, I got "Validation Failed: 1: mapping type is missing;".

I am using this PUT API to update the index mapping:
http://dev.com/my_index/_mapping

For the request JSON, I am using the response I received from
GET http://dev.com/my_index/_mapping

Maybe my request is not correct. Please let me know how to construct the right request for the update-mapping API. I am using ES version 5.6.5.

Thanks,
Pradeep

You have to remove some {}, if I recall, to get a mapping back in (at least for a template); it's easier to use Kibana if you can. This sort of management problem is why we built a product for this :wink:
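For what it's worth, "mapping type is missing" on 5.6 usually means the type name is absent from the PUT URL: the GET response wraps the mapping in index and type keys that have to be stripped, and the PUT goes to /<index>/_mapping/<type>. A hedged sketch of that unwrapping (the index and type names below are illustrative):

```python
def mapping_updates(get_response, index):
    """Turn a GET /<index>/_mapping response into (path, body) pairs for
    PUT /<index>/_mapping/<type>, unwrapping the index/mappings/type
    layers that the PUT body must not contain."""
    return [
        (f"/{index}/_mapping/{type_name}", mapping)
        for type_name, mapping in get_response[index]["mappings"].items()
    ]

# Illustrative GET /my_index/_mapping response (type name "doc" assumed).
sample = {"my_index": {"mappings": {"doc": {
    "properties": {"title": {"type": "text"}}}}}}

for path, body in mapping_updates(sample, "my_index"):
    print("PUT", path)  # PUT /my_index/_mapping/doc
```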

That's not working for me, even in Kibana; I am trying the PUT API in Postman. When I looked into this error, it's pretty common, but with my ES version it seems the schema has to be updated per document type.
My question is: is it easier to delete and re-create the schema rather than update it? I know we will lose the index data, but I have a re-index job, so I can bring my data back. Please suggest which is the better approach, and I'd appreciate it if you could share some examples.

Thanks.
Pradeep
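For the delete-and-re-create route, a hedged sketch of the request sequence (the index name, settings, and mapping below are illustrative; you would capture the current settings and mappings with GET before deleting, since DELETE is irreversible):

```python
def recreate_plan(index, type_name, mapping, shards=5, replicas=1):
    """Build the calls that drop an index and re-create it with a fixed
    mapping, ready for a re-index job to repopulate it."""
    return [
        # Irreversible: all documents in the index are lost.
        ("DELETE", f"/{index}", None),
        # Re-create with settings and the corrected mapping in one request
        # (in 5.x the mapping is keyed by type name).
        ("PUT", f"/{index}",
         {"settings": {"number_of_shards": shards,
                       "number_of_replicas": replicas},
          "mappings": {type_name: mapping}}),
    ]

plan = recreate_plan("my_index", "doc",
                     {"properties": {"title": {"type": "text"}}})
for method, path, _ in plan:
    print(method, path)
```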