Inter-thread-pool concurrency


I see in the docs that some thread pools have a size greater than or equal to the number of available processors. That totally makes sense for a mainly-indexing or mainly-searching workload: all CPU resources are utilized. But what about mixed workloads, such as indexing + searching? I suppose there would be at least 2 × (number of available processors) runnable threads, which leads to lots of context switches and degraded performance. Given that, I think ES would eventually benefit from a thread scheduler that keeps the number of runnable threads close to the number of available processors. A good example might be the Hadoop fair scheduler (although it schedules processes cluster-wide, not threads). Is my reasoning correct?


P.S. I'm presuming I/O is handled by separate threads and we're talking about a CPU-bound workload.
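To make the concern concrete, here is a minimal sketch (not Elasticsearch code; the class name and tasks are hypothetical) of the alternative I have in mind: a single shared pool bounded by core count, so concurrent indexing and search submissions queue up instead of doubling the runnable-thread count.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SharedPoolSketch {
    public static void main(String[] args) throws Exception {
        int cpus = Runtime.getRuntime().availableProcessors();

        // One shared pool bounded by the core count: when indexing and
        // search tasks arrive together, the excess waits in the queue
        // rather than becoming extra runnable threads.
        ExecutorService pool = Executors.newFixedThreadPool(cpus);

        // Stand-ins for a CPU-bound indexing task and a search task.
        Future<String> index = pool.submit(() -> "indexed");
        Future<String> search = pool.submit(() -> "searched");

        System.out.println(index.get() + ", " + search.get());
        pool.shutdown();
    }
}
```

With two separate pools of `cpus` threads each (one for indexing, one for search), a saturated mixed workload has up to `2 * cpus` runnable threads; with the shared pool above it stays at `cpus`.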

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.