Threadpool cluster persistent settings 1.7.3

Hello,

We are using ES 1.7.3, primarily as an execution engine using percolation. The rules we run consume significant CPU resources, so we have powerful 40-core machines. Looking at the default thread pool settings using

curl 'localhost:9200/_cat/thread_pool?v&h=percolate.size,percolate.min,percolate.max,percolate.type'

I get 32 as the value. I am trying to update it to 40 using

curl -XPUT localhost:9200/_cluster/settings -d '
{
  "persistent" : {
    "threadpool.percolate.type" : "fixed",
    "threadpool.percolate.max" : 40
  }
}'

and the new setting shows up in

curl -XGET localhost:9200/_cluster/settings?pretty
{
  "persistent": {
    "threadpool": {
      "percolate": {
        "type": "fixed",
        "max": "40"
      }
    }
  },
  "transient": {}
}

but when I look at

curl 'localhost:9200/_cat/thread_pool?v&h=percolate.size,percolate.min,percolate.max,percolate.type'

it still shows 32. I have tried a restart, but the problem persists. Why is the new value not reflected in the thread_pool cat output? Any help would be greatly appreciated. Thanks in advance!

I think you need to use size and not max. max is for non-fixed thread pools, IIRC.

Yes. This is what I meant. Read this: https://www.elastic.co/guide/en/elasticsearch/reference/1.7/modules-threadpool.html#_literal_fixed_literal
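
For example, something like this should work for a fixed pool (an untested sketch, using the size key from that doc page):

curl -XPUT localhost:9200/_cluster/settings -d '
{
  "persistent" : {
    "threadpool.percolate.type" : "fixed",
    "threadpool.percolate.size" : 40
  }
}'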

Thanks, that worked.

Another question: the link says it's the number of cores times 5, but that would be 40 * 5 = 200. Should I set such a high number, or limit it to one thread per core?

I suspect you are running into trouble, since you want to change the default values.
If so, you could consider dedicating some nodes to percolation only.

Basically, use this: https://www.elastic.co/guide/en/elasticsearch/reference/current/shard-allocation-filtering.html
Send your "percolator" indices to those nodes.
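
A minimal sketch, assuming a custom node attribute named tag and an index called my-percolator-index (both names are illustrative):

# elasticsearch.yml on the nodes dedicated to percolation
node.tag: percolator

# then route the percolator index to those nodes
curl -XPUT localhost:9200/my-percolator-index/_settings -d '
{
  "index.routing.allocation.include.tag" : "percolator"
}'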

Then test again with the default configuration, and monitor CPU usage to see whether you are reaching the limit.
If not, you can increase the thread pool size or add more nodes to spread the load.
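
For example, you can watch CPU from the cluster itself; both of these APIs exist in 1.7:

# per-node process stats, including CPU usage
curl 'localhost:9200/_nodes/stats/process?pretty'

# snapshot of the busiest threads on each node
curl 'localhost:9200/_nodes/hot_threads'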

But I'm speaking here without any knowledge of the issues you are facing today. Feel free to add more details about your setup and what you are doing.

Thanks a lot @dadoonet. We are using ES only as a rule engine: no data, just percolation queries that we match against the docs we throw at them. The most CPU-intensive aspect, I think, is the geo filter we are using; typical queries have about 2,000-3,000 such filters, but really large ones go up to 40,000 filters in a single percolation query. Our original target was 1,500 matching points, but now the business is saying they want about 40k-50k per query, so we are trying to scale the cluster horizontally. We have powerful machines now, but the thread settings seemed a bit weird.

Thanks for the help. Looks like more changes and monitoring for now.