Hello,
On one of my Elasticsearch (ELS) clusters, I have nodes with different hardware capacities:
1 node: 8 GB RAM and a 200 GB disk
1 node: 4 GB RAM and a 20 GB disk
2 nodes: 64 GB RAM and a 4 TB disk
I find that ELS tries to balance the same amount of data onto each node.
The two smaller nodes are nearly full (disk and CPU) while the two bigger ones
don't do much work, so the smaller ones often crash with OOM or other errors.
Is there a parameter, like in Hadoop, to distribute the data by percentage
instead of by MB, and likewise for memory?
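The imbalance itself is easy to confirm with the _cat APIs; a minimal sketch, assuming the default host and port:

    # Shards and disk usage per node
    curl -XGET 'http://localhost:9200/_cat/allocation?v'

    # Heap and RAM usage per node
    curl -XGET 'http://localhost:9200/_cat/nodes?v'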
Elasticsearch doesn't let you weight nodes for balancing, and the disk-space
allocation decider really just puts soft limits on the amount of space
Elasticsearch can take up per machine. There really isn't anything that does
it automatically.
You could use a combination of allocation awareness, total_shards_per_node,
and forcibly pinning shards to machines to get something workable. It's the
best there is right now.
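A minimal sketch of those three knobs, using a hypothetical index name
(logstash-2014.11.11) and a made-up node attribute (box_type) that would first
have to be set in each node's elasticsearch.yml (node.box_type: big on the two
large machines, node.box_type: small on the others):

    # Cap how many shards of this index any single node may hold
    curl -XPUT 'http://localhost:9200/logstash-2014.11.11/_settings' -d '{
      "index.routing.allocation.total_shards_per_node": 2
    }'

    # Pin the index's shards to the big boxes only (allocation filtering)
    curl -XPUT 'http://localhost:9200/logstash-2014.11.11/_settings' -d '{
      "index.routing.allocation.require.box_type": "big"
    }'

    # Spread copies across the awareness attribute
    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "persistent": {
        "cluster.routing.allocation.awareness.attributes": "box_type"
      }
    }'

In effect this is manual weighting: the small nodes end up holding little or no
data, rather than getting a percentage-based share.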
OK, thanks. So if I understand correctly, the best approach is to have the same
hardware capacity on all the nodes involved in the cluster.
ELS needs more polish in this area; perhaps it will come later.
On Tue, Nov 11, 2014 at 3:46 AM, Mark Walkom <markw...@gmail.com> wrote:
You can balance, to a degree, based on disk space, but not heap/system
RAM.
There might be other options, like playing with shard allocation.
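The disk-space balancing is done by the disk allocation decider's watermarks;
they accept percentages (or absolute values), but they only stop new shards
going to a nearly full node rather than weight the balancing. A sketch with
example values:

    # Dynamic cluster settings; the percentages here are just examples
    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "transient": {
        "cluster.routing.allocation.disk.threshold_enabled": true,
        "cluster.routing.allocation.disk.watermark.low": "80%",
        "cluster.routing.allocation.disk.watermark.high": "90%"
      }
    }'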