If I had a cluster with varying amounts of RAM available, e.g. 4, 8, or
16GB available for ES to use via ES_MIN_MEM/ES_MAX_MEM, would the
cluster know how to balance the shards correctly according to the
resources available on each node?
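For reference, this is the kind of per-node heap setup I mean (a minimal sketch, assuming the standard shell wrapper picks up these environment variables; the exact file location is an assumption and varies by install):

    # assumed: environment file sourced by the ES start script
    # on a node with 16GB earmarked for Elasticsearch
    ES_MIN_MEM=16g   # initial JVM heap (-Xms)
    ES_MAX_MEM=16g   # maximum JVM heap (-Xmx); pinning min == max avoids resizing

Each node would carry a different value (4g, 8g, 16g), which is what raises the balancing question.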