Elasticsearch 5.1.1 update thread_pool setting



I've run clusters of ES 1.5 & ES 2.4 in production and ES 5.1.1 in staging. On ES 5.1.1, I fail to update the thread_pool with a query like:

curl -XPUT localhost:9200/_cluster/settings -d '
{
  "persistent" : {
    "thread_pool" : {
      "search" : {
        "queue_size" : "300"
      }
    }
  }
}'
It returns an error:

{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "persistent setting [thread_pool.search.queue_size], not dynamically updateable"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "persistent setting [thread_pool.search.queue_size], not dynamically updateable"
  },
  "status": 400
}

Meanwhile, on both ES 1.5 and 2.4 the thread pool settings can be updated dynamically with the query above (changing 'thread_pool' to 'threadpool').
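For reference, a sketch of the request that worked on 1.x/2.x, assuming the legacy 'threadpool' namespace without the underscore (the value 300 is just an illustration):

```shell
# ES 1.x / 2.x only: legacy 'threadpool' settings were dynamically updatable
curl -XPUT 'localhost:9200/_cluster/settings' -d '
{
  "transient" : {
    "threadpool.search.queue_size" : 300
  }
}'
```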

I don't know whether the thread pool simply can't be updated dynamically on ES 5.1.1. Do I need to set it in elasticsearch.yml on each node? Please advise.


Transient setting [threadpool.search.queue_size], not dynamically updateable
(Jason Tedor) #2

Thread pool settings are now node-level settings and are therefore not dynamically updatable. You must add these settings to your node configuration and restart for them to take effect. This is covered in the migration docs.
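Concretely, per the reply above, the setting would go into each node's elasticsearch.yml roughly like this (the value is illustrative), followed by a restart of that node:

```yaml
# elasticsearch.yml (ES 5.x): node-level setting, requires a node restart to take effect
thread_pool.search.queue_size: 300
```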


Sadly, adjusting it will take a long time, especially for a large cluster with big data on each node, due to shard rebalancing and shard syncing.
Hopefully it can be updated dynamically in a future version.

(Jason Tedor) #4

This should not be true with delayed allocation.

It will not.


As far as I know, when there is an update while the node is restarting, even indexing only one document on the primary shard, the replica will be copied in full from the primary (not just the delta).

For example, say the restarted node holds 3 replica shards of 10 GB each. If their primary shards receive index requests while the node is restarting, the node will copy 3 x 10 GB from the primaries, right? #CMIIW

(Jason Tedor) #6

Not quite. For example, if you sync flush before restarting and the sync ID is not invalidated (for example, there is no other flush on the index), writes can still take place without triggering a complete file copy during recovery, provided the restarted node comes back before the delayed allocation window expires. Instead, only the translog phase of recovery will be replayed, which copies the missed operations from the primary to the replica. However, if the sync ID is invalidated, then a full copy is required.
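The flow described above can be sketched with the ES 5.x APIs, assuming a 5-minute delayed allocation window (the timeout value is illustrative): delay replica reallocation, issue a synced flush, then restart the node.

```shell
# Delay reallocation of replicas so a short restart does not trigger full recovery elsewhere
curl -XPUT 'localhost:9200/_all/_settings' -d '
{
  "settings": { "index.unassigned.node_left.delayed_timeout": "5m" }
}'

# Issue a synced flush so shards carry a sync ID that allows fast recovery
curl -XPOST 'localhost:9200/_flush/synced'
```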

Also, we are working on dramatically improving this (I literally wrote code for this just this week: https://github.com/jasontedor/elasticsearch/commit/feee9f6beda37ea109675ccf2baf05d190f8dc34). In the future, we will be able to do a document-based recovery, meaning that we can replay all the missed operations from the primary, even if there was a flush invalidating the sync ID. For more details of the work involved see: https://github.com/elastic/elasticsearch/issues/10708

Note the last line of the first paragraph:

Internally we could use this ordering to speed up shard recoveries, by identifying which specific operations need to be replayed to the recovering replica instead of falling back to a file based sync.

(system) #7

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.