Is it possible to balance shards with size taken into consideration?

Is this possible?
I know the default is simply shard count, which is important.
But not all shards have the same size.
Is it possible to configure size as a secondary consideration?
That is, pick a larger (or smaller) shard when moving between nodes, based on the capacity of the two nodes.
There might be slightly more overhead, but it would help balance storage size across the cluster much faster when adding new nodes.

Elasticsearch does take account of the capacity of each node when relocating shards, ensuring that it doesn't start a relocation that would breach a disk watermark.

The most time-consuming bit of rebalancing a cluster is moving the data onto the new node. This means it doesn't really matter if it moves a single large shard or two shards each half the size, since the amount of data (and therefore the time it takes) is the same.
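To see this in practice you can compare shard sizes and watch relocation progress with the `_cat` APIs. A minimal sketch, assuming the cluster is reachable at `localhost:9200` (substitute your own endpoint and credentials):

```shell
# List shards sorted by on-disk size, largest first
curl -s "localhost:9200/_cat/shards?v&h=index,shard,prirep,store,node&s=store:desc"

# Watch active relocations and how much data has been copied so far
curl -s "localhost:9200/_cat/recovery?v&active_only=true&h=index,shard,source_node,target_node,bytes_percent"
```

The `store` column makes it easy to verify how unevenly sized your shards actually are, which is what drives the concern in this thread.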

Thanks for the reply.
I don't mean the capacity of the node itself.
I mean moving a larger shard instead of a smaller one if the current node is using more of its capacity.
It's more of a relative decision. Breaching watermark is not a concern for me.
I add nodes when capacity is > 80%.

Looking at overall node capacity is fine. I am looking for a way to naturally balance shards across my cluster.
I have many indices that are not the same size. If shard relocation could use size as a secondary ordering, it would help a lot.

Once the target node has been selected, there may be many candidate shards the algorithm can pick from. If, say, shard A is 1 MB and shard B is 1 GB, I'm looking for a way to "encourage" shard selection to pick B if the target node has more free space, etc.

The reason I'm asking for this is that disk usage on my newest node is 33%, whereas all other nodes are around 45% (2 GB disk per node). That node was added more than a month ago.

Perhaps I'm missing something. Can you explain why this is a problem?

The intent of this post is not to ask for a solution to an issue.
I am asking about a potential setting/configuration that I might have overlooked.

A new node that is still not balanced in terms of storage after a month suggests that size is not part of the relocation decision, hence this question.
Is there a knob to make shard size a secondary criterion?

Anomalies usually suggest something is off, which is why I am addressing this early to prevent potential issues in the future.

I like that ES moves shards around to balance the cluster. As far as I know, it is purely shard count by default. It is hands-free most of the time. But when capacity gets high (> 80%), that much delta becomes an issue. It could mean that I need to add nodes a few weeks earlier than necessary. Small amounts of money wasted like this add up over time.

I see. No, the situation you describe is well within expectations. Elasticsearch aims to keep all nodes below the high watermark (defaults to 90%) and mostly achieves this without needing to move any shards thanks to the low watermark (defaults to 85%). If nodes are below the low watermark then their disk usage is basically ignored. It sounds like you have ample capacity: based on your disk usage numbers you might even have too many nodes right now.
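One way to check where each node sits relative to those watermarks is the `_cat/allocation` API, which reports disk usage per node as Elasticsearch sees it. A sketch, again assuming `localhost:9200` as the endpoint:

```shell
# Shard count and disk usage per node; disk.percent is what the
# low/high watermarks (default 85%/90%) are compared against
curl -s "localhost:9200/_cat/allocation?v&h=node,shards,disk.used,disk.avail,disk.percent"
```

With every node well under the 85% low watermark, a 33% vs. 45% spread is exactly the kind of difference the allocator is designed to ignore.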

Most of our indices are monthly; therefore, we back up and delete the indices for older months.
Capacity fills up quickly. I just performed the backup-and-delete step, hence the low usage today.

Is it recommended to fill storage beyond 80%? I assume it's a function of OS filesystem performance; AFAIK, free capacity below 20% usually causes filesystem degradation.
I assume the watermark is there as a fail-safe.

I also think that an overfilled filesystem would perform worse, but you'll have to do your own experiments to find out exactly where the sweet spot is for your system. 90% is the default but if you'd rather Elasticsearch automatically kept its disks below 80% then by all means reduce the high watermark to 80%.
