Estimating max. search throughput that can be achieved from a cluster

Hello Folks,

I'm trying to estimate the maximum search throughput I can achieve from a given cluster.

Here is my cluster setup:

  1. 27 data nodes (384 GB memory and 48 vCPUs each), each holding 17 shards with 0 replicas, and each attached to an EBS volume.
  2. I'll be performing only script_score-based exact-KNN queries; vectors are pre-filtered by a key, and KNN is performed on the filtered docs.
    2.1 99% of keys have fewer than ~18K documents, so at most I expect a search query to pull ~18K docs of 8 KB each for KNN.
    2.2 I expect a service time of 300 ms per search query at no load, starting cold without warm-up.
  3. The request_cache, query_cache, and fielddata_cache are not used.
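For reference, here is a minimal sketch of what such a pre-filtered exact-KNN query body could look like. The field names (`key`, `embedding`), the key value, and the query vector are hypothetical; only the script_score/cosineSimilarity shape matches what I described above:

```python
# Hypothetical Elasticsearch query body: pre-filter docs by a key, then
# score the filtered docs with exact cosine similarity via script_score.
query_vector = [0.1, 0.2, 0.3]  # illustrative; real vectors are much larger

body = {
    "query": {
        "script_score": {
            # Pre-filter: only docs matching this key are scored.
            "query": {"term": {"key": "customer-123"}},
            "script": {
                # Exact KNN over the filtered docs (+1.0 keeps scores non-negative).
                "source": "cosineSimilarity(params.query_vector, 'embedding') + 1.0",
                "params": {"query_vector": query_vector},
            },
        }
    }
}

print(body["query"]["script_score"]["query"])
```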

My calculation is as follows:

In total, I have 1,971 threads (73 threads × 27 nodes) in the search thread pool. Each search request hits all the shards, so ~27 threads (1 on each node) will be active for 300 ms. At any given time, then, my cluster can handle 73 concurrent search requests, so the search throughput is 73 × (1000 ms / 300 ms) ≈ 243 searches/sec.
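The arithmetic above can be sketched as follows (the thread-pool size, node count, and service time are the numbers from this post; the one-active-thread-per-node figure is my assumption):

```python
# Back-of-envelope search throughput estimate from the numbers in this post.
NODES = 27
SEARCH_THREADS_PER_NODE = 73      # ES default search pool for 48 vCPUs: int(48 * 3 / 2) + 1
SERVICE_TIME_S = 0.300            # assumed no-load, cold-start service time per query
THREADS_PER_REQUEST_PER_NODE = 1  # assumption: one active shard search per node per request

total_threads = NODES * SEARCH_THREADS_PER_NODE  # 1971 threads cluster-wide
# Each request ties up THREADS_PER_REQUEST_PER_NODE threads on every node,
# so concurrency is limited by a single node's pool size.
concurrent_requests = SEARCH_THREADS_PER_NODE // THREADS_PER_REQUEST_PER_NODE  # 73
throughput = concurrent_requests / SERVICE_TIME_S  # searches/sec

print(total_threads, concurrent_requests, round(throughput, 1))
```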

Is the calculation above a good estimate of the search throughput? Am I missing any factors I should be including?
I'm also not sure how to account for the time a request spends waiting in a node's search queue; if you have any thoughts, please share.

Please let me know if you need more info from my side.

Thanks for your time.
