Shard Level Allocation Filtering

Hey, I'm thinking of working on allocation filtering settings for individual shards of a particular index, rather than shard allocation filtering for the entire index.

Basically, I want to manually allocate individual shards to a specific set of nodes, so I can place the shards according to my own heuristics.

Any pointers on how to work on a plugin for this?

Take a look at the ClusterPlugin interface.
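
The relevant hook is `createAllocationDeciders`. A minimal skeleton, assuming made-up class names (exact signatures vary a bit between Elasticsearch versions):

```java
import java.util.Collection;
import java.util.Collections;

import org.elasticsearch.cluster.routing.allocation.decider.AllocationDecider;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.ClusterPlugin;
import org.elasticsearch.plugins.Plugin;

// Placeholder names; ClusterPlugin#createAllocationDeciders is the extension
// point for contributing your own AllocationDecider to the cluster.
public class MyAllocationPlugin extends Plugin implements ClusterPlugin {

    @Override
    public Collection<AllocationDecider> createAllocationDeciders(Settings settings,
                                                                  ClusterSettings clusterSettings) {
        // Deciders returned here are consulted in addition to the built-in ones
        return Collections.singletonList(new MyAllocationDecider());
    }

    // Stub; real placement logic would override canAllocate/canRemain
    public static class MyAllocationDecider extends AllocationDecider {
    }
}
```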


Thanks a lot, I was able to code this up using ClusterPlugin by injecting my own allocation decider.

Now I'm able to control the placement of shards and their replicas using a dynamic, index-scoped setting for my plugin.
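
For anyone who finds this later, roughly the shape of what I ended up with. The setting and class names below are placeholders rather than my actual ones, and a couple of the metadata accessors differ slightly between ES versions:

```java
import java.util.Arrays;
import java.util.List;

import org.elasticsearch.cluster.routing.RoutingNode;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.cluster.routing.allocation.decider.AllocationDecider;
import org.elasticsearch.cluster.routing.allocation.decider.Decision;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;

public class MyAllocationDecider extends AllocationDecider {

    private static final String NAME = "my_plugin";

    // Dynamic, index-scoped setting: comma-separated list of node names the
    // shards of this index should be placed on (registered via Plugin#getSettings)
    public static final Setting<String> PREFERRED_NODES = Setting.simpleString(
        "index.my_plugin.preferred_nodes",
        Setting.Property.Dynamic, Setting.Property.IndexScope);

    @Override
    public Decision canAllocate(ShardRouting shardRouting, RoutingNode node,
                                RoutingAllocation allocation) {
        // Accessor names (metadata()/getIndexSafe()) vary slightly across ES versions
        Settings indexSettings = allocation.metadata()
            .getIndexSafe(shardRouting.index()).getSettings();
        String preferred = PREFERRED_NODES.get(indexSettings);
        if (preferred.isEmpty()) {
            return allocation.decision(Decision.YES, NAME, "no preference configured");
        }
        List<String> nodeNames = Arrays.asList(preferred.split(","));
        if (nodeNames.contains(node.node().getName())) {
            return allocation.decision(Decision.YES, NAME,
                "node [%s] is in the preferred list", node.node().getName());
        }
        return allocation.decision(Decision.NO, NAME,
            "node [%s] is not in the preferred list", node.node().getName());
    }
}
```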

thanks a lot @spinscale

Glad you got it working. Out of curiosity (and if you can talk about it), what kind of allocation logic are you using? What is so special about it that it cannot be covered by the existing features?

I was exploring this to control the placement of the indices. What we noticed from our workload is that on hot nodes, where most of our indexing happens, the shard placement is highly unbalanced: when the indices are created, the shards are placed essentially randomly, so a few hot shards (with a high indexing rate compared to the others) can end up on the same node. This drove some nodes to 100% CPU while others sat at 25%.

I wanted to control the placement of the shards through settings, so that based on yesterday's usage I can place the indices accordingly.

Thanks

Can't you use the total_shards_per_node index setting to force a more even distribution for your indices?
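
Something along these lines at index creation, for example (the index name and host are just examples, and the same setting can go into an index template instead; shown here with the Java high-level REST client):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.common.settings.Settings;

public class CreateIndexWithShardCap {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
            CreateIndexRequest request = new CreateIndexRequest("my-hot-index");
            request.settings(Settings.builder()
                .put("index.number_of_shards", 3)
                .put("index.number_of_replicas", 1)
                // with 3 primaries + 1 replica each on a 3-node cluster, a cap of 2
                // forces exactly 2 shards of this index per node
                .put("index.routing.allocation.total_shards_per_node", 2));
            client.indices().create(request, RequestOptions.DEFAULT);
        }
    }
}
```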

No, for example:

I have 3 nodes, the default number of shards per index set to 2, 3 hot indices with an indexing rate of 100 qps, and 3 inactive indices with an indexing rate of 1 qps. All of the indices are created around 00:00 UTC, i.e. when the day changes and the indices get a new date in their name.

Now the shards can end up placed like this:

Possible shard distribution:
h1s1 => hot index 1, shard 1
c3s2 => cold index 3, shard 2

node 1: h1s1, h2s2, h3s2, c2s2
node 2: h1s2, h3s1, c1s2, c3s1
node 3: h2s1, c1s1, c2s1, c3s2

Now, as you can see, node 3 got shards from 3 cold indices while node 1 got shards from 3 hot indices. Because of this there is a huge CPU imbalance across the nodes that index today's data.

Because of this I built a plugin to control the placement of the shards based on yesterday's data, by creating the templates for tomorrow's indices beforehand. Currently I'm using an approximation DP algorithm to figure out the placement, since an exact search works out to m**n (m = number of nodes, n = number of shards).
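
Just to illustrate the balancing goal (this is a much-simplified greedy sketch, not the approximation DP in my plugin; shard names and rates are made up, and it ignores the constraint that a primary and its replica cannot share a node):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Greedy illustration: sort shards by yesterday's estimated indexing rate and
// always assign the next shard to the node with the least accumulated rate.
public class GreedyPlacementSketch {

    public static void main(String[] args) {
        List<String> nodes = List.of("node-1", "node-2", "node-3");
        // shard name -> estimated indexing rate (e.g. from yesterday's stats)
        Map<String, Double> shardRates = Map.of(
            "h1s1", 100.0, "h1s2", 100.0,
            "h2s1", 100.0, "h2s2", 100.0,
            "h3s1", 100.0, "h3s2", 100.0,
            "c1s1", 1.0, "c1s2", 1.0,
            "c2s1", 1.0, "c2s2", 1.0);

        Map<String, Double> load = new HashMap<>();
        Map<String, List<String>> placement = new HashMap<>();
        nodes.forEach(n -> { load.put(n, 0.0); placement.put(n, new ArrayList<>()); });

        shardRates.entrySet().stream()
            .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
            .forEach(e -> {
                // pick the node with the smallest accumulated rate so far
                String target = nodes.stream()
                    .min((a, b) -> Double.compare(load.get(a), load.get(b)))
                    .orElseThrow();
                load.merge(target, e.getValue(), Double::sum);
                placement.get(target).add(e.getKey());
            });

        placement.forEach((node, shards) ->
            System.out.println(node + ": " + shards + " (rate " + load.get(node) + ")"));
    }
}
```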

If you set the indices up with 3 primary shards, I believe the parameters I linked to should give each index an even distribution.

Hey Christian, that was just an example; I often decide the number of shards based on the amount of data I have. I think setting the shards-per-node limit to 1 will always work when the number of nodes and the number of shards of an index are equal.

But think of a case where my cluster has 60 nodes. To create a balanced configuration I would have to set the number of shards for all indices to 60, but as I explained before, the number of shards is best decided based on the amount of data.

If you have a much larger number of nodes, you should still be able to set the maximum number of shards per node for indices with a low number of primary shards. This will still spread one shard per node, although it will not cover all nodes. You might also consolidate smaller indices in order to have a larger number of shards without ending up with very small shards. I would expect the default distribution pattern for a much larger cluster to differ a lot from a small one, though, so I would expect you to be less likely to see extreme differences in load. How many nodes do you have now that you are seeing this problem?

If you want to create and maintain a custom plugin, feel free to go ahead and do so. I am just trying to save you the effort if the need is not imminent.
