I have recently deployed a small baseline installation with 3.5 TB of storage, 128 GB of RAM, and 16 CPUs.
Is there any best-practice documentation on how I should create a deployment that shares resources equally across the three machines? How many GB of RAM per node should be allocated for the data nodes and the Kibana configuration, and why?
Unfortunately the answer is "it depends a bit on your use case": what your access patterns are like (e.g. read-heavy vs. write-heavy), the total volume of data, what I/O you have, what your reliability requirements are, etc.
A somewhat "vanilla" 3-zone installation might look like:
- 2 zones of 2 GB Kibana
- 2 ES clusters:
  - 3 zones of 4 GB for monitoring
  - 3 zones of 1x 4 GB master-only and 3x 32 GB data/ingest nodes
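To make the arithmetic concrete, here is a rough Python sketch. It is not an ECE deployment-plan payload; the field names, and the assumption of one 128 GB allocator host per zone, are mine. It just tallies how much of each host the example topology above would consume:

```python
# Illustrative only: mirrors the example topology above as plain Python data,
# not the actual ECE deployment-plan schema. All sizes are GB of RAM per node.
example_topology = {
    "kibana": {"zones": 2, "ram_gb": 2},
    "monitoring_cluster": {"zones": 3, "ram_gb": 4},
    "production_cluster": {
        "master_only": {"nodes_per_zone": 1, "ram_gb": 4},
        "data_ingest": {"nodes_per_zone": 3, "ram_gb": 32},
    },
}

HOST_RAM_GB = 128  # assumption: one allocator host per zone, 128 GB each


def ram_per_zone(topology: dict) -> int:
    """Rough RAM footprint of one zone, assuming one allocator host per zone."""
    total = topology["kibana"]["ram_gb"]            # Kibana only runs in 2 of 3 zones
    total += topology["monitoring_cluster"]["ram_gb"]
    prod = topology["production_cluster"]
    total += prod["master_only"]["nodes_per_zone"] * prod["master_only"]["ram_gb"]
    total += prod["data_ingest"]["nodes_per_zone"] * prod["data_ingest"]["ram_gb"]
    return total


if __name__ == "__main__":
    used = ram_per_zone(example_topology)
    print(f"~{used} GB of {HOST_RAM_GB} GB per host used by Elastic workloads")
    # ~106 GB, leaving headroom for the ECE control plane, OS, and filesystem cache.
```

That headroom matters: the remaining RAM per host is not wasted, since the ECE services and the OS page cache also need memory.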
But to get the most out of your cluster, it's better to first figure out what it should look like via support or the ES forums, and then work back into what the ECE config should look like.