ELK data/shard allocation is not happening properly

Hello Team,

We have a 6-node ELK cluster, and all nodes are configured to act as both master and data nodes. Two of the six nodes are consuming more disk space and are at 93%, while the other nodes are at 70% to 80%. We have tried configuring cluster.shard.route.rebalancing, but no luck. So basically disk/shard allocation is not happening properly. We are using ELK version 7.8.1.

Let me know if anyone has any suggestions.


Can you share the output from `_cat/allocation?v`?
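For example, something like this (the host and port are assumptions; point it at any node in your cluster):

```shell
# Per-node shard counts and disk usage, with column headers (?v)
curl -s 'http://localhost:9200/_cat/allocation?v'
```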

Here you go,

shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
508 8.6tb 8.6tb 1.7tb 10.3tb 83 ap0es1-4.m0.sysint.local ap0es1-4.m0.sysint.local
508 5.4tb 5.4tb 4.8tb 10.3tb 52 ap0es1-5.m0.sysint.local ap0es1-5.m0.sysint.local
509 5.9tb 5.9tb 4.4tb 10.3tb 57 ap0es1-0.m0.sysint.local ap0es1-0.m0.sysint.local
509 8.1tb 8.1tb 2.2tb 10.3tb 78 ap0es1-2.m0.sysint.local ap0es1-2.m0.sysint.local
469 9.5tb 9.5tb 826.8gb 10.3tb 92 ap0es1-1.m0.sysint.local ap0es1-1.m0.sysint.local
509 8.1tb 8.1tb 2.2tb 10.3tb 78 ap0es1-3.m0.sysint.local ap0es1-3.m0.sysint.local

There are some closed indices on the ap0es1-1/1-4 nodes. I just want to move them to the other nodes, but I can't find the proper command to do it.

You cannot just move closed indices; you need to reopen them, move them, and then close them again.
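A sketch of that sequence, using a hypothetical index name `logs-example` and the node names from your `_cat/allocation` output (verify the relocation has finished before closing again):

```shell
# 1. Reopen the closed index
curl -X POST 'http://localhost:9200/logs-example/_open'

# 2. Use index-level allocation filtering to push its shards
#    off the nearly full node
curl -X PUT 'http://localhost:9200/logs-example/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.routing.allocation.exclude._name": "ap0es1-1.m0.sysint.local"}'

# 3. Watch relocation progress until no shards are RELOCATING
curl -s 'http://localhost:9200/_cat/shards/logs-example?v'

# 4. Once relocation is done, close the index again
curl -X POST 'http://localhost:9200/logs-example/_close'
```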

What is the output from the `_cluster/stats?pretty&human` API as well, please?

OK, I understand I can't just move closed indices. But how do I rebalance the shards/indices? It looks like most of the newly created indices are going to ap0es1-1 and ap0es1-4; the load is not balancing properly between the nodes.

Elasticsearch balances by shard count, not by disk usage. It will only move shards off nodes that cross the disk watermarks.

So what you are seeing is probably not unexpected and shouldn't be an issue.
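You can check where your watermarks currently sit (the defaults are low: 85%, high: 90%, flood_stage: 95%). The host and port here are assumptions:

```shell
# Show the effective disk-based allocation settings, defaults included
curl -s 'http://localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk'
```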

Can you give me the command to move an index from one node to another within the cluster?

You can try Cluster reroute API | Elasticsearch Guide [7.14] | Elastic
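A minimal reroute example, assuming a hypothetical index `logs-example` and using node names from the `_cat/allocation` output above. Note that reroute moves a single shard copy, not a whole index, and the balancer may move it back later unless allocation filtering keeps it away:

```shell
# Move shard 0 of logs-example from the full node to one with free space
curl -X POST 'http://localhost:9200/_cluster/reroute' \
  -H 'Content-Type: application/json' \
  -d '{
    "commands": [
      {
        "move": {
          "index": "logs-example",
          "shard": 0,
          "from_node": "ap0es1-1.m0.sysint.local",
          "to_node": "ap0es1-5.m0.sysint.local"
        }
      }
    ]
  }'
```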

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.