Automatically assign unassigned shards when there are problems, but never rebalance

Hi, is there any setting I can apply to ES so that it automatically assigns
shards that are unassigned, but never, ever rebalances the cluster?
I've run into several issues when rebalancing and would prefer to do it manually.
If I set cluster.routing.allocation.enable to "none", nothing happens.
If I set it to "all", then it starts rebalancing.

Is it OK to combine cluster.routing.allocation.allow_rebalance set to "none"
with cluster.routing.allocation.enable set to "all"?
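
Concretely, the kind of combination I have in mind would be a transient settings update along these lines (just a sketch; I'm assuming a 1.x cluster on localhost:9200, and I'm not sure these are the right keys and values — I've also seen cluster.routing.rebalance.enable mentioned as the knob for this):

```shell
# Sketch: keep allocation of unassigned shards enabled while disabling
# automatic rebalancing of already-started shards (assumes ES 1.x).
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.enable": "all",
    "cluster.routing.rebalance.enable": "none"
  }
}'
```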

The issue is mainly that we are running low on disk, and when that
happens Elasticsearch removes all shards from an instance. That process
doesn't honor cluster.routing.allocation.cluster_concurrent_rebalance and
starts moving shards like crazy around the entire cluster, filling the
storage on other instances so that the balancing never stops.

Kind regards

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/666a4d70-2497-4a2b-8c5e-774c7d0617b7%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

You could do this, but it's a lot of manual overhead to have to deal with.
However, ES does have some disk-space awareness during allocation; take a
look at the disk-based shard allocation settings:
Elasticsearch Platform — Find real-time answers at scale | Elastic
On 15 January 2015 at 10:57, Matías Waisgold mwaisgold@gmail.com wrote:


Yes, I've seen that, but the problem is that when the threshold is reached
it removes all shards from the server instead of just removing one and
rebalancing. And when that happens, the cluster starts moving shards
everywhere and it never stops.

Another problem we are having is that in the file storage we see data from
shards that are not assigned to that node, so it can't allocate anything in
this dirty state.
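
For what it's worth, the actual assignments can be listed with the cat API (assuming 1.x), to compare against the shard directories found on disk:

```shell
# Sketch: list every shard with its state and the node it is assigned to,
# to compare against the shard data directories present on each node.
curl 'http://localhost:9200/_cat/shards?v'
```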

2015-01-15 0:09 GMT-03:00 Mark Walkom markwalkom@gmail.com:


I've experienced what you're describing. I call it a "shard relocation
storm", and it's really tough to get under control. I opened a ticket on the
issue, and a fix was supposedly included in 1.4.2. What version are you
running?

If you want to truly manage this situation manually, you could set
cluster.routing.allocation.disk.threshold_enabled to false, but that will
likely cause other issues. I ended up just setting
cluster.routing.allocation.disk.watermark.high to a really low value and
actively managing shard allocations to prevent nodes from getting anywhere
near that value. This is tricky: the way ES allocates shards, it can
easily run nodes out of disk if you're regularly creating new indices and
those grow rapidly.
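
As settings updates, those two options would look roughly like this (a sketch; the watermark value is purely illustrative, and I'm assuming it's expressed as an absolute free-space amount):

```shell
# Option 1 (risky): disable the disk threshold decider entirely.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.disk.threshold_enabled": false }
}'

# Option 2: express the high watermark as an absolute free-space value
# and manage shard allocation manually so nodes never approach it.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.disk.watermark.high": "10gb" }
}'
```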

Kimbro

On Thu, Jan 15, 2015 at 6:14 AM, Matías Waisgold mwaisgold@gmail.com
wrote:


Great, thank you. We are creating another cluster with more disk space to
avoid these situations.
By any chance, do you have the link to the issue?

2015-01-15 13:26 GMT-03:00 Kimbro Staken kstaken@kstaken.com:


So is this still happening with 1.4.2?

Here's the ticket; it looks like the fix was supposed to be in 1.4.1:
disk.watermark.high relocates all shards creating a relocation storm · Issue #8538 · elastic/elasticsearch · GitHub

On Thu, Jan 15, 2015 at 10:55 AM, Matías Waisgold mwaisgold@gmail.com
wrote:


I'm on 1.4.1 and still seeing the same behavior.
There should be a better approach than removing all shards at the same time;
it should try to move just a few instead.
We are going to apply the same solution you mentioned: add more disk.
Thanks for your help.

2015-01-15 16:09 GMT-03:00 Kimbro Staken kstaken@kstaken.com:
