We are using ES 1.7.3, primarily as a rule-execution engine via percolation. Our rules use significant CPU resources, so we run powerful 40-core machines. Looking at the default thread pool settings,
it still shows the size at 32. I have tried a restart, but the problem persists. Why is the value not reflected in the `_cat/thread_pool` query? Any help would be greatly appreciated. Thanks in advance!
Another question: the link says the size should be the number of cores times 5, but that would be 40 * 5 = 200. Should I set such a high number, or limit it to 1 per core?
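For reference, this is roughly how I am checking and overriding the pool (a sketch assuming ES 1.x flat setting names; the `percolate.*` column names in `_cat/thread_pool` are an assumption and may differ by version):

```shell
# Show the current percolate thread pool size and queue via the _cat API
# (percolate.* columns assumed available in ES 1.x)
curl 'localhost:9200/_cat/thread_pool?v&h=host,percolate.size,percolate.queue,percolate.active'

# Override in elasticsearch.yml (ES 1.x flat setting names), then restart the node:
#   threadpool.percolate.type: fixed
#   threadpool.percolate.size: 40
```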
Then test again with the default configuration, and monitor CPU usage to see if you are reaching the limit.
If not, you can increase the thread pool size, or add more nodes to spread the load.
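For the monitoring step, something like this can help (a sketch; the `percolate.*` column names are an assumption for ES 1.x):

```shell
# Show which threads are currently burning CPU on each node
curl 'localhost:9200/_nodes/hot_threads'

# Watch for rejected tasks, a sign the percolate pool is saturated
curl 'localhost:9200/_cat/thread_pool?v&h=host,percolate.active,percolate.queue,percolate.rejected'
```

If `rejected` stays at 0 while CPU is pegged, a bigger pool likely won't help; the work itself is the bottleneck.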
But I'm speaking here without any knowledge about issues you may have today. Feel free to add more details about your setup and what you are doing.
Thanks a lot @dadoonet. We are using ES only as a rule engine: no stored data, just percolation queries matched against the docs we throw at them. The most CPU-intensive aspect, I think, is the geo filter we are using; typical queries have about 2,000-3,000 such filters, but really large ones go up to 40,000 filters per percolation query. Our original target was 1,500 points to match, but the business now wants about 40k-50k per query, so we are trying to scale the cluster horizontally. We have powerful machines now, but the thread count seemed a bit weird.
Thanks for the help; it looks like more changes and monitoring for now.
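For anyone following along, this is the shape of what we do (a minimal sketch in ES 1.x syntax; the `rules` index, field names, and coordinates are made up for illustration):

```shell
# Register a percolator query with a geo_distance filter
# (ES 1.x stores percolator queries under the reserved .percolator type)
curl -XPUT 'localhost:9200/rules/.percolator/geo-rule-1' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "geo_distance": {
          "distance": "5km",
          "location": { "lat": 40.73, "lon": -74.10 }
        }
      }
    }
  }
}'

# Percolate a document against all registered rules; matches come back as rule IDs
curl -XGET 'localhost:9200/rules/doc/_percolate' -d '{
  "doc": { "location": { "lat": 40.72, "lon": -74.09 } }
}'
```

Our real queries are like the first one, but with thousands of geo filters each.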