ECE and Shard Size adjustments on Cluster Re-Size events

Has anyone addressed auto-scaling the shards per index (currently set via templates) when ECE expands the cluster's nodes?

Today, we have 24 Data Nodes in our non-ECE based cluster.

  • Via our templates, we have set:
    11 primary shards and 1 replica per primary, per index (so 22 shards in total), and
    total_shards_per_node set to 1, so that we don’t get a busy node with multiple shards of the same index on it.
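As a concrete illustration, a legacy index template carrying those settings might look like the following sketch (the template name `ourdata`, the index pattern, and the `localhost:9200` endpoint are all hypothetical; the setting names are the standard ES ones):

```shell
# Hypothetical legacy index template: 11 primaries + 1 replica each (22 shards),
# and at most 1 shard of any given index per node.
curl -XPUT 'localhost:9200/_template/ourdata' \
  -H 'Content-Type: application/json' -d '
{
  "index_patterns": ["ourdata-*"],
  "settings": {
    "number_of_shards": 11,
    "number_of_replicas": 1,
    "index.routing.allocation.total_shards_per_node": 1
  }
}'
```

With 24 data nodes and 22 shards per index, two nodes stay shard-free for each index, which is the headroom described below.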

This allows us to take 2 nodes offline (or to hit the high/low disk watermarks) without impact.

The ECE ‘challenge’ is to be able to increase the number of shards per index as we expand the nodes.

While we are considering how the upcoming shard-size limitations will help with this, we could still end up with a full node, or more likely a busy node, if everything goes to a single node.

Interesting use case - ECE doesn't currently add any more (or less) support for this than ES does.

(The only possible benefit is being able to infer the expected number of nodes per availability zone from an ECE API call instead of from a slightly "messier" ES call, for automation purposes - but it's obviously a marginal improvement. In a similar vein, ECE makes it easier to listen for cluster change events, so you know a resize has happened and can take whatever steps are needed.)
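For the "messier" ES side of that automation, one rough sketch is to count data nodes from `_cat/nodes`. The sample output below is made up so the sketch is self-contained; on a live cluster you would fetch it with curl as shown in the comment:

```shell
# On a live cluster you would fetch node roles like this (hypothetical endpoint):
#   roles=$(curl -s 'localhost:9200/_cat/nodes?h=node.role')
# Sample output used here instead, so the sketch runs stand-alone:
roles='dim
dim
m
dim'
# Data nodes carry the 'd' role letter; count the lines that contain it.
data_nodes=$(printf '%s\n' "$roles" | grep -c 'd')
echo "$data_nodes"   # 3 data nodes in this sample
```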

Alex

This is where having ES' base template (and inheritability) would come in.
By having only critical settings related to the nodes at the root+1 level, we could update it and roll it out to the cluster on a resize event.
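As a sketch of what that roll-out could look like (all names are hypothetical, and the replica/headroom numbers just mirror the 24-node example above): recompute the primary count from the new node count, then re-publish the template.

```shell
# Hypothetical resize hook: the cluster grew from 24 to 30 data nodes.
nodes=30
replicas=1
headroom=2   # nodes we want to keep shard-free, as with 24 nodes / 22 shards

# With total_shards_per_node=1, each index occupies primaries*(replicas+1) nodes.
primaries=$(( (nodes - headroom) / (replicas + 1) ))

body=$(cat <<EOF
{
  "index_patterns": ["ourdata-*"],
  "settings": {
    "number_of_shards": $primaries,
    "number_of_replicas": $replicas,
    "index.routing.allocation.total_shards_per_node": 1
  }
}
EOF
)
echo "$body"
# Publishing it would then be, e.g.:
#   curl -XPUT 'localhost:9200/_template/ourdata' \
#     -H 'Content-Type: application/json' -d "$body"
```

Note that template changes only apply to indices created afterwards; existing indices keep their shard count, which is part of why the 'check, then update' step is non-trivial.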

But it does seem like the 'check, then update' step is something best devised by the vendor.
An ILM feature for the hot nodes, perhaps.
