Can a Cluster Reside on more than 1 Allocator?

I currently have a cluster which sits on one allocator. As data grows over time, is it possible for this cluster to be scaled out horizontally over a few allocators?

Certainly!
Firstly, based on the number of availability zones you have configured (ECE requires 3 AZs to support an HA setup), nodes will be deployed on allocators across the different AZs. This way your cluster can tolerate the failure of 1 or 2 AZs, depending on how many AZs you configured (2 or 3) and how many replicas you have (1 or 2).

Furthermore, the default allocation strategy (in ECE 2.1 and above) is "fill first" with anti-affinity, meaning ECE will attempt to place nodes from the same cluster on different allocators within the same AZ whenever possible. This reduces the chance that an allocator failure affects all of a cluster's nodes in that AZ.
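
If you want to check how a deployment's nodes actually ended up distributed, you can ask the cluster itself. Below is a minimal sketch in Python using the _cat APIs; the endpoint and credentials are placeholders, and the exact zone-related node attributes ECE sets can vary by version.

```python
import requests

# Placeholder endpoint and credentials for the Elasticsearch cluster (assumptions).
ES_URL = "https://my-cluster.example.com:9243"
AUTH = ("elastic", "changeme")

# List nodes with their host IPs to see how many distinct allocators they run on.
nodes = requests.get(f"{ES_URL}/_cat/nodes?v&h=name,ip,node.role", auth=AUTH)
print(nodes.text)

# List node attributes; ECE tags nodes with zone-related attributes, which
# shows how the nodes are spread across availability zones.
attrs = requests.get(f"{ES_URL}/_cat/nodeattrs?v", auth=AUTH)
print(attrs.text)
```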

Hope this helps.

Thanks for the explanation. In the scenario where the disk starts to fill up as the data grows, will ECE automatically scale the cluster out from 1 to 2 allocators, or does the ECE administrator need to scale it out manually?

Auto-scaling is not yet supported. You can leverage the Elasticsearch disk watermarks to ensure that running out of disk space does not affect a node's availability.
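
For illustration, the disk-based shard allocation watermarks can be adjusted through the cluster settings API. A minimal sketch in Python follows; the endpoint and credentials are placeholders, and the thresholds shown are just examples.

```python
import requests

# Placeholder endpoint and credentials (assumptions).
ES_URL = "https://my-cluster.example.com:9243"
AUTH = ("elastic", "changeme")

# Raise or lower the disk-based shard allocation watermarks so Elasticsearch
# stops allocating new shards (and eventually blocks writes) before a disk fills up.
settings = {
    "persistent": {
        "cluster.routing.allocation.disk.watermark.low": "85%",
        "cluster.routing.allocation.disk.watermark.high": "90%",
        "cluster.routing.allocation.disk.watermark.flood_stage": "95%",
    }
}

resp = requests.put(f"{ES_URL}/_cluster/settings", json=settings, auth=AUTH)
resp.raise_for_status()
print(resp.json())
```
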
As for Allocators, you can also monitor and alert on Allocator disk usage using the metrics we store in the logging and metrics cluster. These metrics are collected out of the box by the Beats sidecar.
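
As an illustration, a simple check against the logging and metrics cluster could look like the sketch below. The cluster URL, index pattern (metricbeat-*), field names, and threshold are assumptions you would adapt to your environment.

```python
import requests

# Placeholder endpoint and credentials for the logging and metrics cluster (assumptions).
METRICS_URL = "https://logging-and-metrics.example.com:9243"
AUTH = ("elastic", "changeme")

# Find hosts (allocators) whose filesystem usage exceeded 85% in the last 5 minutes.
# The index pattern and field names follow Metricbeat's system module conventions;
# verify them against your own cluster before relying on this.
query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"range": {"@timestamp": {"gte": "now-5m"}}},
                {"range": {"system.filesystem.used.pct": {"gte": 0.85}}},
            ]
        }
    },
    "aggs": {"hosts": {"terms": {"field": "host.name", "size": 50}}},
}

resp = requests.post(f"{METRICS_URL}/metricbeat-*/_search", json=query, auth=AUTH)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["hosts"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```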

For manual scaling, does it mean that the administrator increases the cluster's RAM (and hence disk storage), and ECE will automatically scale out to another allocator if there are not enough resources on the current one?

The steps I am referring to are at: Resize your Deployment

Yes, exactly. If there isn't enough capacity, the node will be placed on a different allocator, always respecting the AZ configuration.
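
If you prefer to script the resize rather than use the Cloud UI, the sketch below shows the general idea: fetch the current plan, increase the RAM per node, and submit the plan back. The endpoint, field names, and payload shape are assumptions based on my reading of the ECE 2.x clusters API; check the API reference for your version before using anything like this.

```python
import requests

# Placeholder ECE coordinator URL, credentials, and cluster ID (assumptions).
ECE_URL = "https://ece-coordinator.example.com:12443"
AUTH = ("admin", "changeme")
CLUSTER_ID = "example-cluster-id"

plan_url = f"{ECE_URL}/api/v1/clusters/elasticsearch/{CLUSTER_ID}/plan"

# Fetch the current plan, bump the RAM per node, and submit the modified plan.
# Disk capacity scales with RAM, and ECE moves nodes to another allocator if
# the current one lacks capacity.
plan = requests.get(plan_url, auth=AUTH, verify=False).json()
for topology in plan.get("cluster_topology", []):
    topology["memory_per_node"] = 8192  # MB per node; illustrative value

resp = requests.post(plan_url, json=plan, auth=AUTH, verify=False)  # verify=False: adjust for your certs
resp.raise_for_status()
print(resp.json())
```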
