I have a cluster with the following specs (es v 1.5.2):
3 nodes with RAM: 32GB, CPU cores: 8 each
63 total indices = 32 marvel + 1 kibana + 30 data
366 total shards = (32 marvel + 1 kibana + 150 data) primaries, each with 1 replica
959,231,444 total docs
588.38GB total data
ES_HEAP_SIZE=16g
I successfully deleted around 200 empty indices and restarted the cluster (without disabling shard allocation first). Normally allocation takes about 1 hour to finish, but this time it has been over 12 hours and I still have 183/366 unassigned shards (half!).
Also, I can see that node1 has only 6 shards allocated to it; most of the data shards are split between node2 and node3. I tried restarting node1 but ended up in the same situation. Why isn't node1 taking more shards? And why is the allocation stuck?
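In case it helps, these are the kinds of diagnostics I can run and share (a sketch only; endpoint paths are from the ES 1.x cat/cluster APIs, and `localhost:9200` is assumed for one of the nodes):

```shell
# Overall cluster state and the unassigned-shard count
curl -s 'localhost:9200/_cluster/health?pretty'

# Per-shard state; filter out STARTED to see only the problem shards
curl -s 'localhost:9200/_cat/shards?v' | grep -v STARTED

# Shard counts and disk usage per node (e.g. is node1 over a disk watermark?)
curl -s 'localhost:9200/_cat/allocation?v'

# Any transient/persistent settings that might disable or throttle allocation
curl -s 'localhost:9200/_cluster/settings?pretty'
```

If any of these outputs would point at the cause (disk watermarks, allocation disabled, recovery throttling), let me know which to post.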