595 unassigned shards

Hi, I walked in this morning to 595 unassigned shards.

What is best practice for resolving without data loss?

Depends, are they primaries or replicas?
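
If you're not sure which they are, the cat shards API will show it; a rough check, assuming the cluster answers on localhost:9200 (the prirep column is "p" for a primary, "r" for a replica):

curl -s 'localhost:9200/_cat/shards?h=index,shard,prirep,state' | grep UNASSIGNED

Unassigned replicas can normally be rebuilt from their primaries with no data loss; unassigned primaries are the ones to worry about.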

Okay, the problem has mutated.

Now down to 30 unassigned shards with 2 constantly relocating shards.

My question is: why are these two relocating shards taking forever? Could there be a different issue?
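
To see what the two relocating shards are actually doing, the cat recovery API reports per-shard progress; a rough check, assuming the cluster answers on localhost:9200 and filtering out recoveries that have already finished:

curl -s 'localhost:9200/_cat/recovery?v' | grep -v done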

elasticsearch.yml on each node is:

node.name: "elksrv1"
node.master: true
node.data: true
index.number_of_replicas: 2
index.number_of_shards: 5
cluster.routing.allocation.node_concurrent_recoveries: 2
indices.recovery.max_bytes_per_sec: 20mb
indices.recovery.compress: false
discovery.zen.minimum_master_nodes: 3
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["elksrv1","elksrv2","elksrv3","elksrv4","elksrv5"]
script.engine.groovy.inline.update: on
http.cors.allow-origin: "/.*/"
http.cors.enabled: true
threadpool.search.queue_size: 10000
cluster.routing.allocation.disk.threshold_enabled: false
cluster.routing.allocation.enable: all
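
One thing that stands out for the "taking forever" question: this config throttles recovery to 20mb per second with at most 2 concurrent recoveries per node, so large shards will move slowly. Both settings are dynamic, so the throttle can be raised temporarily through the cluster settings API; a sketch, with the value only as an example:

curl -s -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}'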

The output of /_cluster/health?pretty is below:

{
  "cluster_name" : "elasticsearchlogstashkibana",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 1190,
  "active_shards" : 3531,
  "relocating_shards" : 2,
  "initializing_shards" : 0,
  "unassigned_shards" : 30,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}

All fixed up. I wrote a script to relocate the shards, and that resolved it.
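
In case it helps anyone hitting the same thing, here is a minimal sketch of that kind of script, assuming an ES 1.x-style reroute API, a cluster on localhost:9200, and a placeholder target node name. It only force-allocates replicas, since allocating a primary with allow_primary: true creates an empty shard and loses that shard's data:

#!/bin/bash
# Rough sketch only: placeholder host and node name, ES 1.x-style reroute command.
HOST="localhost:9200"
TARGET_NODE="elksrv2"   # placeholder: pick a node with spare disk and heap

# List shards as "index shard prirep state" and keep the unassigned ones.
curl -s "$HOST/_cat/shards?h=index,shard,prirep,state" |
awk '$4 == "UNASSIGNED" {print $1, $2, $3}' |
while read INDEX SHARD PRIREP; do
  # Only force-allocate replicas; forcing a primary would discard its data.
  if [ "$PRIREP" = "r" ]; then
    curl -s -XPOST "$HOST/_cluster/reroute" -d "{
      \"commands\": [ {
        \"allocate\": {
          \"index\": \"$INDEX\",
          \"shard\": $SHARD,
          \"node\": \"$TARGET_NODE\"
        }
      } ]
    }"
    echo
  fi
done

After running something like this, watching /_cluster/health until unassigned_shards drops to zero confirms the allocations stuck.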