I have a track.json config like this:
```json
{
  "operation": "query-platform",
  "clients": 8,
  "warmup-iterations": 5,
  "iterations": 125,
  "target-throughput": 100
}
```
After the benchmark finished, I got the results below:
| All | Min Throughput | query-platform | 9.86 | ops/s |
| All | Median Throughput | query-platform | 10.95 | ops/s |
| All | Max Throughput | query-platform | 11.22 | ops/s |
Does this mean that Elasticsearch cannot reach the target throughput of 100 ops/s, and that the maximum reachable throughput is 11.22 ops/s? Am I right?
From the track reference I know:
> i.e. if you just run 5 iterations you will not get a 99.9th percentile because we need at least 1000 iterations to determine this value precisely
So with 8 clients running 125 iterations each, that is 8 × 125 = 1000 iterations in total. Does that mean the percentile metrics below are precise?
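My rough understanding of the 1000-iteration requirement: with n sorted samples, neighboring order statistics are 1/n apart in rank, so the 99.9th percentile only becomes distinguishable from the maximum once n reaches 1000. A small sketch using the nearest-rank percentile definition (my own illustration, not Rally's actual percentile code):

```python
def resolves_percentile(n: int, pct_num: int = 999, pct_den: int = 1000) -> bool:
    """Can n samples resolve the (pct_num/pct_den) quantile as its own value?"""
    # Nearest-rank definition: the q-quantile is sample number ceil(q * n), 1-based.
    rank = -(-pct_num * n // pct_den)  # integer ceiling of (999/1000) * n
    return rank < n                    # strictly below the max (the 100th percentile)?

print(resolves_percentile(5))     # 5 iterations: the 99.9th pct collapses into the max
print(resolves_percentile(1000))  # 1000 iterations: it becomes a distinct sample
```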
| All | 50th percentile latency | query-platform | 43801.2 | ms |
| All | 90th percentile latency | query-platform | 74826.3 | ms |
| All | 99th percentile latency | query-platform | 81474.5 | ms |
| All | 99.9th percentile latency | query-platform | 82157.8 | ms |
| All | 100th percentile latency | query-platform | 82186.7 | ms |
| All | 50th percentile service time | query-platform | 712.296 | ms |
| All | 90th percentile service time | query-platform | 827.655 | ms |
| All | 99th percentile service time | query-platform | 923.236 | ms |
| All | 99.9th percentile service time | query-platform | 999.587 | ms |
| All | 100th percentile service time | query-platform | 1015.55 | ms |
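One way I read these numbers: with 8 concurrent clients and a median service time of about 0.712 s, the cluster can complete at most roughly 8 / 0.712 ≈ 11.2 ops/s, which matches the measured max throughput of 11.22 ops/s almost exactly. A back-of-the-envelope check (my own reasoning, using the figures from the tables above):

```python
clients = 8
median_service_time_s = 0.712296  # 50th percentile service time from the table above

# Each client can complete ~1/service_time requests per second, so the
# achievable throughput is capped near clients / service_time.
max_achievable = clients / median_service_time_s
print(f"~{max_achievable:.2f} ops/s")  # close to the measured max of 11.22 ops/s
```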
And from the metrics reference I know:
> latency
> : Time period between submission of a request and receiving the complete response. It also includes wait time, i.e. the time the request spends waiting until it is ready to be serviced by Elasticsearch.
What does wait time mean? Is it that the request first sits in some internal queue of Elasticsearch, which then consumes requests one by one, so that wait time = time spent in Elasticsearch's internal queue? Am I right? And should I care about it, or is it enough to pay attention to throughput only?
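Going by the definition quoted above, wait time should be roughly latency minus service time, and in my results that gap is enormous (at the median, 43801.2 ms − 712.296 ms ≈ 43.1 s of waiting). A quick subtraction on my numbers, as a rough illustration only, since subtracting one percentile distribution from another is not strictly valid:

```python
# Percentile values in ms, copied from the two tables above.
latency_ms = {"50th": 43801.2, "90th": 74826.3, "99th": 81474.5,
              "99.9th": 82157.8, "100th": 82186.7}
service_ms = {"50th": 712.296, "90th": 827.655, "99th": 923.236,
              "99.9th": 999.587, "100th": 1015.55}

# If latency = wait time + service time, the wait completely dominates:
for pct in latency_ms:
    wait = latency_ms[pct] - service_ms[pct]
    print(f"{pct}: ~{wait / 1000:.1f} s spent waiting")
```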