Hi,
I have a 3-node cluster where each node has the following roles:
"roles" : [
"ingest",
"master",
"data",
"ml"
]
Currently each server has 32 GB of memory, which means each node has a 16 GB heap for Elasticsearch. In the last few weeks I have been working with machine learning jobs, and due to the high cardinality of the fields, the jobs' memory consumption is high. My question is: how much can my cluster grow in memory without affecting performance?
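For reference, this is roughly how I have been checking current usage, a minimal sketch using the standard _cat and ML job stats APIs (no specific job IDs assumed):

# Heap usage and roles per node
GET _cat/nodes?v&h=name,node.role,heap.percent,heap.max,ram.max

# Per-job model memory: model_size_stats.model_bytes and memory_status
# show how close each anomaly detection job is to its memory limit
GET _ml/anomaly_detectors/_stats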