Recommendations on new setup

Hello,

I am building new ES cluster using free ES offering.

I have three physical servers with 1 TB of RAM each; expected log volume is 20-30 million events per hour.

I am looking for best practices on how to set up the cluster and take advantage of the 3 TB of memory.

Does it make sense to run multiple Docker instances on each host to take advantage of the memory?

Thanks

Yes! If you are saying you have 1 TB of RAM, then yes, absolutely you should run multiple Docker instances / data nodes on that single host. Of course, you have said nothing about the number of CPUs, and the underlying storage / network performance is always something to think about.

That much RAM is probably a bit overkill...

OK, there are lots of variables, and this is back-of-the-envelope stuff; it depends on the size and complexity of your events, how much parsing, etc.

Think of each Docker instance / data node as having 64 GB RAM; that is a good working unit, assuming you can get 4 to 8 vCPUs with that 64 GB of RAM.

So if you have 30M events per hour,
that is about 8.3K events per second.

If those events are ~500 B per event, then you are at 4-5 MB/sec and about 350 GB/day.
All reasonable numbers.
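A minimal sketch of that arithmetic (the 500-byte average event size is an assumption, not a measurement):

```python
# Back-of-the-envelope ingest math from the thread's numbers.
events_per_hour = 30_000_000
event_size_bytes = 500  # assumed average event size

events_per_second = events_per_hour / 3600             # ~8.3K events/sec
mb_per_second = events_per_second * event_size_bytes / 1_000_000
gb_per_day = mb_per_second * 86_400 / 1_000            # ~360 GB/day

print(f"{events_per_second:,.0f} events/sec, "
      f"{mb_per_second:.1f} MB/sec, {gb_per_day:.0f} GB/day")
```

Plug in your own measured event size; a 2x difference there doubles every downstream number.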

You will probably need more than 1 primary shard so you get some parallelism on ingest.

I would probably set up one of the following:

Beefy / Strong / Very Resilient

3 masters, 1 on each host (masters can be smaller in both RAM and CPU)
6 data nodes of 64 GB and 8 CPUs each, 2 on each host.
If you have 6 data nodes, you could set primary shards to 3 + 1 replica and probably get solid performance.
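As a rough example, the index settings for that 3-primary + 1-replica layout would look something like this (the index name and any rollover policy are up to you; this is just the settings body):

```python
import json

# Sketch of index settings for 3 primaries + 1 replica: 6 shard copies
# in total, so each of the 6 data nodes carries one copy on average.
index_settings = {
    "settings": {
        "index.number_of_shards": 3,
        "index.number_of_replicas": 1,
    }
}

# This is the JSON body you would PUT when creating the index
# (or place in an index template).
print(json.dumps(index_settings, indent=2))
```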

Minimal
You could probably get away with 3 x 64 GB, 8-CPU master + data nodes, 1 on each host, as a minimum config, then add data nodes as needed. Once you get to 6 or so data nodes, dedicated masters are a good idea.

Again, this assumes good fast disk I/O, network, etc., and it is all back-of-the-envelope.

Also, I have not factored in the retention you want. Each of those 64 GB nodes could have anywhere from 2 TB to 10 TB of fast storage. So depending on how long you want to keep the data, that drives how many nodes you need, etc. There is a lot to figure in; do all the math and that's your starting point.
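A rough sketch of that retention math (every number here is an assumption; note that replicas double the stored volume, and you'd want to leave disk headroom on top of this):

```python
import math

# Assumed inputs: ~350 GB/day of primary data (from the estimate above),
# 1 replica, 30 days of retention, 2 TB of usable storage per data node.
gb_per_day_primary = 350
replicas = 1
retention_days = 30
per_node_storage_gb = 2_000

total_gb = gb_per_day_primary * (1 + replicas) * retention_days
nodes_needed = math.ceil(total_gb / per_node_storage_gb)

print(f"~{total_gb:,} GB stored, needing at least {nodes_needed} such nodes")
```

Swap in your own retention window and per-node disk; that is where the node count really gets decided.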

There are lots of docs etc

Thanks for the quick response. I have 48 CPU cores, 15 TB of usable SSD disk, and a 10 Gb network on each host.

The logs are coming from 1500 sources, a mix of OS, application, and network device logs.

Is there an example of Docker files with a config close to this?

Thanks

Not my area of expertise. Maybe someone else can help.

And you want persistent volumes. Also not my area of expertise, but you want to mount the host file system into Docker, so if you lose the container, you don't lose the data.

I would definitely read our documentation in detail

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.