Search_coordination thread queue filling up. Can we change the setting?

Hello,

We operate several Elasticsearch clusters and have noticed that, on our most heavily queried cluster, the search_coordination thread pool periodically cannot keep up. This results in a queue that translates into high latency for the end consumer.

For reference, the cluster in question has 3 master nodes, 6 coordinator nodes, and ~50 data nodes. Interestingly, given the pool's name I would have expected the issue to occur on the coordinator nodes, but the pool is actually being saturated on the data nodes. The data nodes are fairly large: over a dozen cores, a couple hundred GB of memory, and fast 1 TB SSDs attached.

In the documentation I see the below for search_coordination default sizing:

For lightweight search-related coordination operations. Thread pool type is fixed with a size of a max of min(5, (# of allocated processors) / 2), and queue_size of 1000.

The wording is a bit confusing, but if I am interpreting it correctly, the maximum number of threads by default is 5. My first question: can we increase the number of threads dedicated to search_coordination? (I believe the answer is yes, via the elasticsearch.yml file.) My second question: should we increase the static size?
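For concreteness, this is roughly what I have in mind, a minimal sketch assuming search_coordination accepts the standard fixed thread pool overrides (I have not verified the exact setting key against the docs for our version):

    # elasticsearch.yml on each data node; this is a static setting, so it would need a rolling restart
    # Assumption: search_coordination honours the standard fixed pool override like the other pools do
    thread_pool.search_coordination.size: 8
    # Deliberately leaving queue_size at its default of 1000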

From what I have read, tuning the defaults is not recommended, but the threads not keeping up with demand is causing consumer impact. Is increasing the dedicated thread pool size a good idea? Is there lower-hanging fruit elsewhere, such as moving to more numerous but smaller data nodes?

Thank you

Welcome to our community! :smiley:

That means your node cannot keep up, not the thread pool. The thread "Any idea what these errors mean version 2.4.2 - #5" by jasontedor is a really good post on this.

It's a min of 5, but a max of half of the number of processors on the node. So a 32 core CPU will have 16 threads.
And then the node will hold 1000 requests in its queue before rejecting them.

Hi @warkolm,

I appreciate your response. I had read the post you linked, and it definitely makes sense why one would not want to increase the queue size, as that just masks the problem and can make the situation worse. My apologies if I was not clear: my intention is not to increase the queue size but rather the number of search_coordination threads, i.e. the size parameter.


Regarding the number of search coordination threads:

It's a min of 5, but a max of half of the number of processors on the node. So a 32 core CPU will have 16 threads.

Unless I am missing something (or the docs are incorrect), the formula listed for the default number of search_coordination threads is min(5, (# of allocated processors) / 2).

If we plug in some hypothetical values for the number of processors:

4 processors: min(5, 4/2) simplifies to min(5, 2) = 2 threads

10 processors: min(5, 10/2) simplifies to min(5, 5) = 5 threads

16 processors (the value we give Elasticsearch on our data nodes): min(5, 16/2) simplifies to min(5, 8) = 5 threads

Am I missing something? The maximum value that formula can produce is 5. This also seems to be reflected in reality: if I query _nodes/thread_pool on the cluster, the relevant portion of the result is:

  "search_coordination": {
      "type": "fixed",
      "size": 5,
      "queue_size": 1000
  },

even when available_processors is (correctly) reflected as 16 in the _nodes/os output.


Regarding the threads not being able to keep up, you mentioned:

That means your node cannot keep up, not the thread pool.

What can be done to alleviate this? Is this a common issue? We do not see a non-negligible queue for any of the other pools, and CPU usage does not spike above ~55%; the only thing we see is the search_coordination thread pool being fully exhausted.
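For reference, this is roughly how we have been checking the pool when latency spikes (the cat thread pool API; our dashboards are built on the same stats):

    GET _cat/thread_pool/search_coordination?v&h=node_name,active,size,queue,rejected,completed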

In my naive mind the solution would be to increase the thread pool size or to add more data nodes; the latter has cost drawbacks and seems wasteful when we are not even utilizing 60% of our allocated CPU. (A slight note on this: the cluster was set up before I joined the team, and our underlying nodes actually have 32 cores, but only 16 are requested by our ECK pods, so in reality we are using ~10 of 32 cores. That is another reason I am hesitant to scale horizontally for CPU.)
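To frame that last point, below is a rough, illustrative sketch of the direction I am considering on the ECK side: raising the CPU request on the data nodeSet toward the physical core count. The nodeSet name, version, counts, and whether node.processors needs to be set explicitly (or is derived automatically from the CPU limit) are all assumptions I would still need to verify.

    # Illustrative excerpt of our ECK Elasticsearch resource; names and numbers are placeholders
    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: search-cluster            # placeholder cluster name
    spec:
      version: 8.13.0                 # placeholder version
      nodeSets:
        - name: data                  # assumed data nodeSet name
          count: 50
          config:
            node.processors: 24       # expose more of the node's 32 cores to Elasticsearch (may not be needed if derived from limits)
          podTemplate:
            spec:
              containers:
                - name: elasticsearch
                  resources:
                    requests:
                      cpu: "24"       # raise from the current 16 toward the 32 physical cores
                    limits:
                      cpu: "24"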

It has been a long day of digging, so it is very possible my brain is missing something obvious. I appreciate your reply and look forward to a response.

How many indices and shards does a query typically access? What is your average shard size? How many concurrent queries do you typically have?

Hi @Christian_Dahlqvist,

Thanks for your response. We have a wide variety of workloads running: some hit as few as 1 index and some hit as many as 8 indices. I can't give a specific number of shards, but some queries touch every shard in an index, and our bigger indices typically have ~20-60 shards. We have recently begun doing better with resharding, and most of our shards are under 50 GB since we currently target 45 GB, though one of the ~20-shard indices has a shard size of 160 GB. We had no issues with that index at high volume for months until we hit this (what seems like a) saturation point with the search_coordination pool.

We have high volume on this cluster, typically around 6-8k QPS and sometimes up to 15k QPS. All of our metrics look good and our latencies are really good until we start using all of the search_coordination threads across the cluster. At that point the queue begins to fill (it has never hit the max, as far as we have seen) and our P99s go from a few hundred ms to a few seconds.

I appreciate any feedback or suggestions you can offer.

One other thing that made me want to slightly increase the size of the search_coordination pool: we only see issues when we push past about 12,000 to 13,000 QPS. In my mind, even going from 5 to 8 threads would give us a lot more headroom before we hit this issue again, but I could be totally off base.

Any clarification on this @warkolm? Thank you
