I'm deploying Elasticsearch on a cluster with different node sizes: some nodes
have 32GB of memory and some have 16GB. I'd like more shards to be allocated
on the nodes with more memory.
I googled a bit, and there are settings that can exclude some indices from
some nodes, but that's not very convenient. So I'm wondering whether there's
a 'weight' setting for individual nodes, or whether ES already allocates
shards based on node memory size?
Thanks.
Nope. I asked for it a few years ago, but it's never been a high enough
priority. We don't have weights on the indices either.
Your best bet is to pin any heavier shards to the machines with more RAM via
a tag (node attribute) on those machines. The shards will still be able to
move between those nodes just fine.
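A minimal sketch of that approach, using Elasticsearch's shard allocation filtering with a custom node attribute. The attribute name `ram` and the index name `my-heavy-index` are placeholders I picked for illustration; use whatever names fit your cluster.

```
# elasticsearch.yml on the 32GB nodes
node.attr.ram: high

# elasticsearch.yml on the 16GB nodes
node.attr.ram: low
```

Then require the heavy index's shards to live on the tagged nodes:

```
PUT my-heavy-index/_settings
{
  "index.routing.allocation.require.ram": "high"
}
```

With `require`, the shards of that index can only be allocated to nodes whose `ram` attribute is `high`, but they can still rebalance freely among those nodes.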