Hi, is there any setting in ES that makes it automatically assign shards
that are unassigned but never, ever rebalance the cluster?
I've found several issues when rebalancing and prefer to do it manually.
If I set cluster.routing.allocation.enable to "none", nothing happens.
If I set it to "all", then it starts rebalancing.
Is it ok to combine cluster.routing.allocation.allow_rebalance set to "none"
with cluster.routing.allocation.enable set to "all"?
The issue is mainly that we are running low on disk, and when that
happens Elasticsearch removes all shards from an instance. That process
doesn't respect cluster.routing.allocation.cluster_concurrent_rebalance and
starts moving shards like crazy around the entire cluster, filling up the
storage on other instances so that it never stops rebalancing.
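To make it concrete, this is the kind of combination I mean, applied as
transient settings through the cluster update settings API (the host is just
an example, and I'm not even sure "none" is an accepted value for
allow_rebalance, which is partly why I'm asking):

    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "transient": {
        "cluster.routing.allocation.enable": "all",
        "cluster.routing.allocation.allow_rebalance": "none"
      }
    }'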
You could do this, but it's a lot of manual overhead to deal with.
However, ES does have some disk space awareness during allocation; take a
look at the disk-based shard allocation settings.
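Roughly these settings (the watermarks shown here are just the documented
defaults, not a recommendation):

    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "persistent": {
        "cluster.routing.allocation.disk.threshold_enabled": true,
        "cluster.routing.allocation.disk.watermark.low": "85%",
        "cluster.routing.allocation.disk.watermark.high": "90%"
      }
    }'

Once a node crosses the high watermark, ES starts relocating shards off it
on its own.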
Yes, I've seen that, but the problem is that when the threshold is reached
it removes all shards from the server instead of moving just one and
rebalancing. And when that happens the cluster starts moving shards all
over the place and it never stops.
Another problem we are having is that on disk we see data from shards that
are no longer assigned to that node, so it can't allocate anything in this
dirty state.
I've experienced what you're describing. I called it a "shard relocation
storm" and it's really tough to get under control. I opened a ticket on the
issue and a fix was supposedly included in 1.4.2. What version are you
running?
If you want to manage this situation fully manually, you could set
cluster.routing.allocation.disk.threshold_enabled to false, but that will
likely cause other issues. I ended up just setting
cluster.routing.allocation.disk.watermark.high to a really low value and
actively managing shard allocations to keep nodes from getting anywhere
near that value. This is tricky, because the way ES allocates shards can
easily run nodes out of disk if you're regularly creating new indices that
grow rapidly.
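As a rough sketch of what I mean (transient settings via the cluster
settings API; the 70% value is just an example, pick whatever leaves you
enough headroom on your disks):

    # Option 1: turn the disk threshold decider off entirely (risky)
    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "transient": { "cluster.routing.allocation.disk.threshold_enabled": false }
    }'

    # Option 2: keep it enabled but lower the high watermark, then keep
    # nodes well below it yourself
    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "transient": { "cluster.routing.allocation.disk.watermark.high": "70%" }
    }'

    # "Actively managing" allocations then means moving shards by hand with
    # the reroute API (index and node names here are made up):
    curl -XPOST 'http://localhost:9200/_cluster/reroute' -d '{
      "commands": [
        { "move": { "index": "myindex", "shard": 0, "from_node": "node1", "to_node": "node2" } }
      ]
    }'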
I'm on 1.4.1 and still seeing the same behavior.
There should be a better approach than removing all shards at the same
time; it would make more sense to move just a few.
We are going to apply the same solution you mentioned and add more disk.
Thanks for your help.