Unassigned shards after adding Shard Allocation Awareness

I have an Elasticsearch 2.3.3 cluster with 4 nodes and about 3 TB of data, and I use the Kopf plugin as a cluster "dashboard".
I changed my cluster configuration by adding Shard Allocation Awareness, so I added the following lines to elasticsearch.yml on my nodes:

Nodes 1 and 2:

node.rack_id: 01
cluster.routing.allocation.awareness.attributes: rack_id

Nodes 3 and 4:

node.rack_id: 02
cluster.routing.allocation.awareness.attributes: rack_id
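
For reference, the attribute each node is actually running with can be checked through the nodes info API; the filter_path parameter below is only there to trim the response:

GET /_nodes?filter_path=nodes.*.name,nodes.*.attributes

Every node should list its rack_id under "attributes"; if one doesn't, the elasticsearch.yml change wasn't picked up on that node.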

Then I restarted the elasticsearch service on each node, one by one, but I still have 133 unassigned shards in my cluster. At first there were around 1000 unassigned shards, and they were progressively reallocated, but the decrease stopped at 133. It seems the process is still running and still trying to reallocate these shards.
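
To see exactly which shards are stuck, and why, the cat shards API can list them together with an unassigned reason column (I believe the unassigned.* columns are available on 2.3, but the exact column names may vary by version):

GET /_cat/shards?v&h=index,shard,prirep,state,node,unassigned.reason

Filtering the output on UNASSIGNED shows whether the remaining 133 shards all share the same reason.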

I already tried disabling and re-enabling routing allocation, but the issue is still present:

PUT /_cluster/settings
{
  "transient" : {
    "cluster.routing.allocation.enable" : "none"
  }
}

PUT /_cluster/settings
{
  "transient" : {
    "cluster.routing.allocation.enable" : "all"
  }
}
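
Another thing worth trying on one of the stuck replicas is a manual allocation through the reroute API. This is only a sketch: my_index, shard 0 and node-3 are placeholders, and the dry_run flag keeps it from actually changing anything:

POST /_cluster/reroute?explain=true&dry_run=true
{
  "commands" : [
    {
      "allocate" : {
        "index" : "my_index",
        "shard" : 0,
        "node" : "node-3"
      }
    }
  ]
}

With explain=true the response contains an explanations section listing each allocation decider's decision, which should point at the rule (awareness, same-shard, disk, ...) that is rejecting the shard.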

Do you have any idea how I can fix that?

Do you have any indices with more than 1 replica configured?

Nope, all indices with unassigned shards have 1 replica configured ("number_of_replicas": "1").
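
For what it's worth, the replica count can be confirmed across all indices with the settings API (filter_path just trims the output):

GET /_settings?filter_path=*.settings.index.number_of_replicas

All the indices that have unassigned shards report "number_of_replicas": "1".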

I have already tried that a few months ago, so I know that if I remove the Shard Allocation Awareness configuration and restart the elasticsearch service on all nodes, all shards will be allocated. But now I really need this configuration to ensure that all replicas will only be on nodes that are in the same rack.
