Allocator information


(Phr0gz) #1

Hello, we are currently using a simple hardware architecture (1 VM = 1 node), and my company is looking into the best subscription program. ECE is, of course, on the table.

I have a few questions that the available documentation doesn't answer:

On each of our VMs, is it possible to install a Docker container and move the ES node inside it (whether ES is reinstalled from scratch or not)? According to the documentation, the recommended allocator memory size is 128-256 GB, so I guess an allocator is designed to host multiple ES nodes. But the idea here is to migrate without changing the VMs, keeping a 1:1 ratio (1 VM = 1 container = 1 node).

If not, does that mean we will need dedicated bare-metal servers?

Our current infra is not very small (approx. 200 vCPUs, 700 GB RAM, > 50 TB disk), so this would represent a lot of changes... And finally, I'm not sure ECE is suitable for companies that are already using dedicated load balancers, virtualization farms, and SAN equipment...


(Alex Piggott) #2

From the description of your environment, ECE may not fit that well.

There is no issue with running an ECE host inside a VM (assuming it has sufficient resources, in particular I/O, though of course ECE's restrictions are the same as ES's here).

That said, if you have a large number of smaller VMs that you provision dynamically(?) as you provision new clusters, with a 1:1 relationship, then that doesn't match up well with ECE, where we dynamically provision containers within larger hosts (which can be VMs or bare metal).

The 1:1 relationship is not an issue per se, but there is some overhead per "allocator host" (e.g. 8 GB or so). So, for example, you'd have a 72 GB VM advertising 64 GB available to ECE, and then when you created a cluster that includes a 64 GB node, it would get provisioned there (or 40 GB vs. 32 GB).
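The sizing arithmetic above can be sketched as a quick back-of-the-envelope calculation. Note that the 8 GB per-host overhead is the ballpark figure from this thread, not an official requirement:

```python
# Rough ECE allocator-capacity arithmetic, as described above.
# OVERHEAD_GB is the ~8 GB per-allocator-host estimate from this
# thread; treat it as a rule of thumb, not official sizing guidance.
OVERHEAD_GB = 8

def advertised_capacity(vm_ram_gb, overhead_gb=OVERHEAD_GB):
    """RAM an allocator VM can advertise to ECE for hosting ES nodes."""
    return vm_ram_gb - overhead_gb

def vm_size_needed(node_ram_gb, overhead_gb=OVERHEAD_GB):
    """VM RAM needed to host a single ES node of the given size (1:1)."""
    return node_ram_gb + overhead_gb

# The examples from this reply:
print(advertised_capacity(72))  # 72 GB VM -> 64 GB available to ECE
print(vm_size_needed(32))       # 32 GB node -> 40 GB VM
```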

In general, the more dynamic your VM provisioning, the less suitable ECE is.

On the final point: SAN storage and dedicated load balancers are a pretty standard configuration, and we certainly run on virtualization farms too ... though typically with larger, more static machines, as discussed above.

Alex


(Phr0gz) #3

Thanks for the reply!
It's not a dynamic infra, but more of a "static" one: the number of servers may increase occasionally but will not decrease.

So if I have two types of VM used for ES, with 32 GB or 40 GB RAM: to prepare a 1:1 migration (1 VM : 1 node) with ECE, we will need to add 8 GB of RAM (for the allocator) to each VM. Right?

I'm aware that it is not the best solution, but it may be our only way to get the benefit of the support included with the ECE subscription. And it will give us some time to prepare 3-4 bare-metal servers (dedicated to ES) without a hypervisor... or something else.

Ludovic


(Alex Piggott) #4

So if I have two types of VM used for ES, with 32 GB or 40 GB RAM: to prepare a 1:1 migration (1 VM : 1 node) with ECE, we will need to add 8 GB of RAM (for the allocator) to each VM. Right?

You might be able to squeeze by with a bit less for pure allocators ... e.g. looking at https://www.elastic.co/guide/en/cloud-enterprise/current/ece-topology-example1.html, you could probably get away with 1 GB for the runner and 1 GB for the allocator service (the other services listed are "control plane", so they would run in separate dedicated VMs that don't host clusters; e.g. 16 GB would be a realistic size for those). So 2 GB would be a pretty scary minimum, but 4 GB of overhead per allocator VM is realistic.
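Applying these overhead estimates to the two VM types in question gives the per-VM RAM needed for the 1:1 migration. A sketch; the 2 GB / 4 GB figures are the rough estimates from this reply, not official sizing guidance:

```python
# Per-VM RAM needed for a 1:1 migration (1 VM = 1 allocator = 1 ES node)
# under the overhead estimates discussed above (not official figures).
NODE_SIZES_GB = [32, 40]  # the two existing VM types

OVERHEADS_GB = {
    "scary minimum (1 GB runner + 1 GB allocator)": 2,
    "realistic": 4,
}

for label, overhead in OVERHEADS_GB.items():
    for node in NODE_SIZES_GB:
        print(f"{node} GB node, {label}: {node + overhead} GB VM")
```

So instead of the full 8 GB per VM, the 32 GB and 40 GB VMs would need roughly 36 GB and 44 GB respectively under the realistic 4 GB estimate.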

You can get the same support and Stack features without an ECE subscription, so if you're thinking of moving your deployment architecture over time toward a more ECE-compatible one, I think the decision comes down to something like:

  • There will be some overhead/hassle in setting ECE up compared to running ES directly in the VMs, which would be counterbalanced in part by the simplicity of managing the clusters (how much varies from case to case)
  • You would be able to migrate the clusters from VMs to (e.g.) bare metal more easily if it all started in ECE; depending on how painful a service interruption is, this might be a consideration

Alex


(Phr0gz) #5

Thanks, very helpful !!


(system) #6

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.