hi guys,
I'm trying to explore this new functionality to understand whether it could help me decrease infrastructure costs (compared to bare metal or virtual machines).
I saw that I can install ECE on premises, so I can install it on a single virtual machine (high availability will be ignored in my demo environment).
After that, will I be able to deploy Elasticsearch instances on AWS?
Currently ECE is "single region", meaning that (approximately) all the hosts in an ECE deployment need "LAN-like" access to each other.
So while it's possible using cloud VPN technologies, it's not trivial.
One thing we are currently working on actively is support for x-ECE CCR and CCS (no ETA yet, but "coming soon").
This would be one big step closer to hybrid infrastructure. It's not ideal, because there would be two separate UI/API sets for ECE "control plane management", but once that was set up you'd have one UI/API per ES cluster set spanning on-prem and cloud.
(Longer term there is work ongoing to add x-region support to the ECE control plane. But that is significantly further down the road than x-region CCS/CCR.)
Alex
so, basically, if I install ECE on AWS I must have my deployment in the AWS world. In the same way, if I install it on premises, I must have everything on premises. Is that correct?
That is correct (unless possibly using something like AWS Direct Connect)
perfect.
another doubt (if I can ask): taking a look at the small installation scenario, I should have 3 different EC2 instances with at least 128 GB of RAM each.
- the admin console "Cloud UI" (ref. ECE infrastructure image) will be available on each host, right?
- 3 hosts with 128 GB of RAM each is really a big number if I want to keep costs low at the beginning (think of a small project that wants to get started but doesn't have a big budget yet). Am I thinking correctly, or am I missing something?
sorry for my trivial questions
The 128GB is just to match the total amount of Elasticsearch (and other stack services) RAM desired
The amount of RAM required by all of the ECE services themselves is around 15GB (I think we've shrunk it down further, but I don't remember the exact amount - it wasn't much lower)
So if, say, you want 3 zones with 2 ES clusters of 16GB each, a monitoring cluster of 2GB, and 3 Kibanas of 1GB each, then you'd want 32+2+3 = 37GB of RAM capacity, plus 15GB of system clusters and services per zone. So each of the 3 ECE hosts would need to be in the 52GB range; say 64GB with some headroom.
If you only needed your clusters to be 2-zone, then you'd deploy 3 ECE hosts (it has to be >=3 for HA) of, say, 42GB instead (I think licenses are per 64GB) and you could probably squeeze them all in.
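That sizing arithmetic can be sketched as a quick back-of-the-envelope calculator. This is only a rough illustration: the 15GB ECE overhead and the workload sizes are the figures discussed in this thread (not official sizing guidance), and the `ram_per_host` helper and 1.2 headroom factor are my own assumptions.

```python
# Rough per-host RAM estimate for an all-in-one ECE install,
# using the figures from this thread (not official sizing guidance).
ECE_OVERHEAD_GB = 15  # approximate ECE system clusters/services per host


def ram_per_host(workload_gb_per_zone: float, headroom: float = 1.2) -> float:
    """Workload RAM placed in one zone, plus ECE overhead, plus headroom."""
    return (workload_gb_per_zone + ECE_OVERHEAD_GB) * headroom


# 2 ES clusters of 16GB + a 2GB monitoring cluster + 3x 1GB Kibana = 37GB/zone
workload = 2 * 16 + 2 + 3 * 1
print(round(ram_per_host(workload)))  # 62 -> pick a 64GB host per zone
```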
Does that sort of process make sense in the context of what you're planning?
ETA: oh, I forgot:
- the admin console "Cloud UI" (ref. ECE infrastructure image) will be available on each host, right?
correct, though it doesn't have to be - the only components that need duplicates on all 3 nodes in the "all-in-one" architecture are the "director" (ZooKeeper), the runner, and the allocator. The API/UI and proxy can live on 2 of the 3, and the ES clusters can be "2 zone" (which is actually 3 zone, but one of the zones just gets a 1GB master-only node).
Does that sort of process make sense in the context of what you're planning?
honestly, no. I'm really confused.
if I want just 1 ES cluster (16 GB) and 3 Kibanas (1 GB each) spread across 3 zones, I should have:
3 AWS instances (EC2);
3 ECE hosts (one on each EC2 instance);
then:
EC2 #1: ECE (15 GB) + Kibana (1 GB) + ES cluster (16 GB) = 32 GB
EC2 #2: ECE (15 GB) + Kibana (1 GB) = 16 GB
EC2 #3: ECE (15 GB) + Kibana (1 GB) = 16 GB
are these numbers correct?
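For what it's worth, the per-host sums above can be checked mechanically. The sizes are the ones proposed in this message, and the layout dictionary is just an illustration, not an ECE data structure.

```python
# Per-host RAM totals for the proposed 3-host layout (figures from this thread).
hosts = {
    "EC2 #1": {"ECE": 15, "Kibana": 1, "ES cluster": 16},
    "EC2 #2": {"ECE": 15, "Kibana": 1},
    "EC2 #3": {"ECE": 15, "Kibana": 1},
}

for name, parts in hosts.items():
    # Sum the RAM of every component placed on this host.
    print(name, "=", sum(parts.values()), "GB")  # 32, 16, 16 GB
```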
furthermore,
The amount of RAM required by all of the ECE services themselves is around 15GB
checking the JVM resources list in the example here, the ECE services should require 26 GB; can you explain this difference to me?
ECE services should require 26 GB; can you explain to me this difference?
Those numbers are very safe/conservative - if you filled up 3x 128GB with lots of small ES clusters and were very aggressive in the load you placed on it, you might get close to needing that much.
(In fact my 15GB/zone is even more aggressive than it sounds because it includes 2 system ES clusters, one for AC searches and one for logging and metrics)
if I want just 1 ES cluster (16 GB) and 3 Kibanas (1 GB each) spread across 3 zones
We wouldn't normally recommend running 3 zones of Kibanas and 1 zone of ES - if you don't care about the HA-ness of your one cluster, might as well just run ECE as a non-HA 1x64 host (or just run ES directly from the docker images and not worry about the ECE infra at all)
Probably a minimum point where it's worth the overhead of ECE is something like
EC2 #1 48GB: ECE (15GB) + ES data nodes (+ misc other stack services)
EC2 #2 48GB: ECE (15GB) + ES data nodes (+ misc other stack services)
EC2 #3 32GB: ECE (15GB) + ES master eligible nodes (+ misc other stack services)
Those numbers are very safe/conservative - if you filled up 3x 128GB with lots of small ES clusters and were very aggressive in the load you placed on it, you might get close to needing that much.
great. So now I know that in a safe scenario I should follow the official numbers; otherwise I can also decrease them (just a little bit)
We wouldn't normally recommend running 3 zones of Kibanas and 1 zone of ES - if you don't care about the HA-ness of your one cluster, might as well just run ECE as a non-HA 1x64 host (or just run ES directly from the docker images and not worry about the ECE infra at all)
understood.
just check this last configuration:
EC2 #1 64GB: ECE (15GB) + ES_firstProject data nodes + ES_secondProject data nodes (+ misc other stack services)
EC2 #2 64GB: ECE (15GB) + ES_firstProject data nodes + ES_secondProject data nodes (+ misc other stack services)
EC2 #3 32GB: ECE (15GB) + ES_firstProject master eligible nodes + ES_secondProject data nodes (+ misc other stack services)
where ES_firstProject and ES_secondProject are 2 different search-oriented ES clusters, so 16 GB each; if they are logs-oriented I can decrease them to 4-8 GB.
should be acceptable right?
Yep, looks sensible from a technical perspective
Alex
@Alex_Piggott thanks a lot for this kind of support
Now I just have to start playing with it.