Cluster has pending tasks. Cannot update cluster settings

Hello,

We've managed to get our cluster into a state where it has pending tasks
and we are unable to update cluster settings.

Has anyone else experienced this and been able to remove these tasks? The
alternative we're considering is to kill the cluster and start again, but
we'd like to avoid that.

We got into this state by changing the threadpool search queue size. We
first changed it from its default of 1000 to unbounded (-1), which applied
without issue, and then ran a load test against our application. Next we
changed the threadpool search queue size to 500, but this change does not
appear to have completed, even though the new value shows up in
:9200/_cluster/settings.

The curl command we used to apply the thread pool settings was:
curl -XPUT :9200/_cluster/settings -d '{ "transient" : {
"threadpool.search.queue_size" : 500}}'

Logs for updating the threadpool search queue size from 1000 to -1:
2013-12-03 14:04:44,515 DEBUG threadpool [X-Man] creating
thread_pool [search], type [fixed], size [40], queue_size [-1]
2013-12-03 14:04:44,516 DEBUG cluster.service [X-Man] processing
[zen-disco-receive(from master [[Cap 'N
Hawk][1SA1x8JwRHCk5xm9_MmLGA][inet[:9300]]])]: done applying updated
cluster_state (version: 1598)

Logs for updating the threadpool search queue from -1 to 500 (Note it never
applied.):
2013-12-03 15:13:32,901 DEBUG threadpool [X-Man] creating
thread_pool [search], type [fixed], size [40], queue_size [500]

When we try to update the threadpool search queue size again we can see the
task being added to the pending tasks list, but it is then rejected. The
error we get is:-
ProcessClusterEventTimeoutException[failed to process cluster event
(cluster_update_settings) within 30s]; ","status":503}

Looking at the pending tasks (:9200/_cluster/pending_tasks) shows the
following (the cluster_update_settings task goes in with a higher
insert_order number than the ones listed below):-
{
  "tasks": [
    {
      "insert_order": 33,
      "priority": "HIGH",
      "source": "update-mapping [][]",
      "time_in_queue_millis": 82081320,
      "time_in_queue": "22.8h"
    },
    {
      "insert_order": 40,
      "priority": "HIGH",
      "source": "update-mapping [][]",
      "time_in_queue_millis": 64831934,
      "time_in_queue": "18h"
    }
  ]
}

We are running Elasticsearch 0.90.7.

Thanks,
Jenny


We ended up following the workaround listed on GitHub to resolve this:-
No obvious way to delete persistent cluster admin settings · Issue #3670 · elastic/elasticsearch · GitHub

We had to shut the cluster down, remove the global state file and restart
Elasticsearch. Going forward we'll use the elasticsearch.yml file to apply
these changes.
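
For anyone curious, the static setting we plan to put in each node's
elasticsearch.yml would look roughly like this (same key as in the curl
call above; a node restart is needed for it to take effect):

threadpool.search.queue_size: 500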


Hi Jenny,

We are running the same ES version as you and have updated the transient
cluster configuration with no problem. Can you tell us which JVM you are
running, and which version?
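
For example, the output of the following on one of the Elasticsearch nodes
would be enough:

java -version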

/Jason


Hi Jason,

We're running:-
java version "1.6.0_27"
OpenJDK Runtime Environment (IcedTea6 1.12.6) (6b27-1.12.6-1ubuntu0.12.10.2)

Thanks,
Jenny


Hmm... maybe you want to run the Oracle JVM. We have a lot in common! We
are running JVM version 1.6.0_25. Not exactly sure where your problem is,
but the official docs recommend the Oracle JVM. You might want to consider
switching to that and see whether the problem comes up again later.

/Jason


Ah, sorry, I managed to paste in my local developer box details rather than
the production version. In production (where we had the issue) we're using
java version "1.7.0_25", which should be fine.

Thanks,
Jenny
