I need to set up an Elasticsearch cluster for the following conditions and would like a sanity check on my hardware sizing, even if some people won't like...
Available in total: 512 GB RAM and 26 CPU cores. Traditional text-based logs, around 3.5 TB per 7 days. Queries only need to be fast for the last 3 days.
The setup I have worked out so far looks like this:
- 6 machines with 64 GB RAM each (31 GB heap allocated to Elasticsearch) and 4 CPU cores, as data nodes (all master-eligible)
- 1 machine with 16 GB RAM (8 GB heap allocated to Elasticsearch) and 2 CPU cores for one dedicated master-only node, plus Kibana and Logstash
- 8 TB of disk space in total
- one active index, rolled over to a new index with 6 shards when it reaches 200 GB, which makes roughly 33 GB per shard
- shrinking to one shard after 3 days to get some resources back
- force-merging after 5 days to reduce disk usage
- deleting after 7 days
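For what it's worth, the rollover/shrink/forcemerge/delete steps above map fairly directly onto an ILM policy. This is only a sketch: the policy name `logs-policy` and the endpoint are placeholders, and ILM puts `shrink` and `forcemerge` in the same warm phase, so both would run at day 3 here rather than forcemerging separately at day 5:

```shell
# Sketch of the lifecycle above as an ILM policy (names/timings are assumptions).
curl -X PUT "localhost:9200/_ilm/policy/logs-policy" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "200gb" }
        }
      },
      "warm": {
        "min_age": "3d",
        "actions": {
          "shrink":     { "number_of_shards": 1 },
          "forcemerge": { "max_num_segments": 1 }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}'
```

Running it by hand with the rollover API and cron jobs works too; ILM just automates the same steps.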
1.) Is the index slicing (rollover size and shard count) OK?
2.) Do I need more memory? I have read best practices saying you need as much memory as the total indexed data size.
3.) Do I need more CPUs on the nodes, especially on the master, since Kibana and Logstash are running there?
4.) Are 8 TB of total disk space enough?
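On question 4, here is my back-of-the-envelope arithmetic. Two assumptions that are not confirmed anywhere above: that the indexed size on disk is roughly equal to the raw log size (in practice it depends on mappings and compression), and that each index carries 1 replica:

```python
# Rough disk check for the 7-day retention window.
# Assumptions: on-disk index size ~ raw log size; 1 replica per index.
TB = 1000  # work in GB, decimal units

raw_per_week_gb = 3.5 * TB   # 3.5 TB of logs per 7 days
days_retained = 7
replicas = 1                 # assumed replica count

daily_ingest_gb = raw_per_week_gb / 7                   # ~500 GB/day
primary_on_disk_gb = daily_ingest_gb * days_retained    # primaries over 7 days
total_on_disk_gb = primary_on_disk_gb * (1 + replicas)  # with replicas

headroom_gb = 8 * TB - total_on_disk_gb

print(f"daily ingest:  {daily_ingest_gb:.0f} GB")   # 500 GB
print(f"total on disk: {total_on_disk_gb:.0f} GB")  # 7000 GB
print(f"headroom:      {headroom_gb:.0f} GB")       # 1000 GB
```

If those assumptions hold, 7 days of data with one replica is about 7 TB, leaving only ~1 TB of headroom on 8 TB, and forcemerge temporarily needs extra space to rewrite segments, on top of the disk watermarks Elasticsearch enforces. So 8 TB looks tight to me, which is why I am asking.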