How can I prevent the shards of a new index from being assigned to only a few nodes, and spread them out instead?

When a new daily index is created, shards tend to be allocated only to a few nodes with sufficient capacity.

This causes rejected operations, because those few nodes take every incoming request.

I want to prevent shards from being concentrated on only a few nodes (including evening out disk usage by relocating existing shards); one possible per-index cap is sketched after the note below.

  • The cluster has a few nodes limited by a watermark and others with enough free space.
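
For reference, a per-index cap on how many shards of one index may live on a single node can stop a new daily index from piling onto the emptiest nodes. A minimal sketch using a composable index template (Elasticsearch 7.8+); the template name, index pattern, and the value 2 are placeholders, not recommendations:

PUT _index_template/daily-logs
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "index.routing.allocation.total_shards_per_node": 2
    }
  }
}

Note that total_shards_per_node is a hard limit, so too low a value can leave shards unassigned when a node is down.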

Also, are there any APIs to rebalance shards across existing indices?
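
For reference, the cluster reroute API can move individual shards between nodes by hand; a minimal sketch, where the index and node names are placeholders:

POST _cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "logs-2023.01.01",
        "shard": 0,
        "from_node": "node-nearly-full",
        "to_node": "node-with-space"
      }
    }
  ]
}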

What is the output from _cat/allocation?v?

shards disk.indices disk.used disk.avail disk.total disk.percent host           ip             node
86        1.1tb     1.1tb    309.5gb      1.4tb           79 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    162.5gb      1.4tb           89 0.0.0.0  0.0.0.0
86        1.1tb     1.2tb    259.9gb      1.4tb           82 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    257.1gb      1.4tb           82 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    246.9gb      1.4tb           83 0.0.0.0  0.0.0.0
86      674.9gb   684.9gb    804.7gb      1.4tb           45 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    159.7gb      1.4tb           89 0.0.0.0  0.0.0.0
86     1023.4gb       1tb      457gb      1.4tb           69 0.0.0.0  0.0.0.0
86        1.1tb     1.1tb    260.9gb      1.4tb           82 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    184.5gb      1.4tb           87 0.0.0.0  0.0.0.0
87        1.1tb     1.1tb    279.2gb      1.4tb           81 0.0.0.0  0.0.0.0
86      990.9gb   997.8gb    491.8gb      1.4tb           66 0.0.0.0  0.0.0.0
86        985gb   992.5gb    497.1gb      1.4tb           66 0.0.0.0  0.0.0.0
85        1.2tb     1.2tb    184.1gb      1.4tb           87 0.0.0.0  0.0.0.0
86        1.1tb     1.1tb    281.4gb      1.4tb           81 0.0.0.0  0.0.0.0
86        1.1tb     1.1tb    271.4gb      1.4tb           81 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    227.4gb      1.4tb           84 0.0.0.0  0.0.0.0
86        1.1tb     1.1tb    269.9gb      1.4tb           81 0.0.0.0  0.0.0.0
86        1.1tb     1.1tb    284.4gb      1.4tb           80 0.0.0.0  0.0.0.0
87        1.1tb     1.1tb    311.6gb      1.4tb           79 0.0.0.0  0.0.0.0
85      960.9gb   967.3gb    522.3gb      1.4tb           64 0.0.0.0  0.0.0.0
85        1.2tb     1.2tb    181.2gb      1.4tb           87 0.0.0.0  0.0.0.0
86        1.1tb     1.1tb      290gb      1.4tb           80 0.0.0.0  0.0.0.0
85      799.3gb   807.2gb    682.4gb      1.4tb           54 0.0.0.0  0.0.0.0
86        1.1tb     1.1tb    268.1gb      1.4tb           81 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb      230gb      1.4tb           84 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    206.4gb      1.4tb           86 0.0.0.0  0.0.0.0
85        1.2tb     1.2tb    219.4gb      1.4tb           85 0.0.0.0  0.0.0.0
85        1.1tb     1.1tb    330.9gb      1.4tb           77 0.0.0.0  0.0.0.0
86      974.5gb   983.3gb    506.3gb      1.4tb           66 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    212.7gb      1.4tb           85 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    190.3gb      1.4tb           87 0.0.0.0  0.0.0.0
86        1.1tb     1.1tb    313.7gb      1.4tb           78 0.0.0.0  0.0.0.0
86          1tb       1tb    363.5gb      1.4tb           75 0.0.0.0  0.0.0.0
86        1.1tb     1.1tb    295.7gb      1.4tb           80 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    232.9gb      1.4tb           84 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    243.9gb      1.4tb           83 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    234.9gb      1.4tb           84 0.0.0.0  0.0.0.0
86        1.2tb     1.2tb    176.4gb      1.4tb           88 0.0.0.0  0.0.0.0
85        1.2tb     1.2tb    244.3gb      1.4tb           83 0.0.0.0  0.0.0.0
86        1.1tb     1.1tb    291.9gb      1.4tb           80 0.0.0.0  0.0.0.0
85        1.1tb     1.2tb    259.6gb      1.4tb           82 0.0.0.0  0.0.0.0
86      895.1gb   901.1gb    588.4gb      1.4tb           60 0.0.0.0  0.0.0.0
85        1.1tb     1.1tb      285gb      1.4tb           80 0.0.0.0  0.0.0.0
85        1.2tb     1.2tb    206.2gb      1.4tb           86 0.0.0.0  0.0.0.0
86        1.1tb     1.1tb    272.4gb      1.4tb           81 0.0.0.0  0.0.0.0
85        1.2tb     1.2tb    210.1gb      1.4tb           85 0.0.0.0  0.0.0.0
85        1.2tb     1.2tb    183.2gb      1.4tb           87 0.0.0.0  0.0.0.0
85        1.1tb     1.1tb    266.2gb      1.4tb           82 0.0.0.0  0.0.0.0  
85        1.1tb     1.1tb    297.7gb      1.4tb           80 0.0.0.0  0.0.0.0 

I erased the IP & hostname because they are my company's info :slight_smile:

And I found a thread dealing with a similar problem: How to rebalance primary shards on elastic cluster

Shard allocation is pretty even across the cluster, as it's based on the count of shards per node.
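
For context, the count-based balancing heuristics are controlled by cluster settings such as cluster.routing.allocation.balance.shard and cluster.routing.allocation.balance.index; their current and default values can be inspected (a sketch) with:

GET _cluster/settings?include_defaults=true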

Are you using allocation awareness?

No. I'm not using allocation awareness yet.
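
For reference, allocation awareness would look roughly like this: each node is tagged with a custom attribute, and the cluster is told to spread shard copies across that attribute's values (a sketch assuming an attribute named rack_id):

# In each node's elasticsearch.yml:
node.attr.rack_id: rack_one

# Then, via the cluster settings API:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "rack_id"
  }
}

This mainly spreads copies of each shard across racks or zones rather than evening out disk usage, so it would not by itself fix the watermark problem.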

The number of shards is well balanced, but I want disk usage to be balanced as well. Because some nodes are limited by the watermark (high: 90%), new indices are allocated to only a few nodes.
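
To confirm that the watermark is what is keeping new shards off the fuller nodes, the allocation explain API shows the per-node decision for a given shard; a sketch, with a placeholder index name:

GET _cluster/allocation/explain
{
  "index": "logs-2023.01.01",
  "shard": 0,
  "primary": true
}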

It looks like quite a few nodes are at or around the default low watermark level (85% full), which could be affecting allocation. You may want to alter the watermark settings and/or free up some space to avoid this becoming an issue.
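
For example, the disk watermarks are dynamic cluster settings and can be adjusted on the fly; the percentages below are illustrative, not recommendations:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "87%",
    "cluster.routing.allocation.disk.watermark.high": "92%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}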

Yes, but I think that's not enough, because the nodes will reach the watermark again in a few days. Also, my disks are physical SSDs, so I don't want to alter the watermark :slight_smile:
