Not able to hit target throughput

Hi, I have a benchmark which kinda looks like this:

    {
        "name": "standard-benchmark",
        "description": "Measure ES performance in standard use cases",
        "default": true,
        "schedule": [
            {
                "operation": "index-all",
                "clients": 8,
                "warmup-time-period": 30,
                "schedule": "deterministic",
                "target-throughput": 14000
            },
            {
                "operation": "force-merge"
            },
            {
                "parallel": {
                    "warmup-time-period": 30,
                    "time-period": 1800,
                    "clients": 50,
                    "tasks": [
                        {
                            "operation": "index-incremental",
                            "clients": 5,
                            "schedule": "poisson",
                            "target-throughput": 10
                        },
                        {
                            "operation": "custom-query",
                            "clients": 45,
                            "schedule": "poisson",
                            "target-throughput": 150
                        }
                    ]
                }
            }
        ]
    }

The custom-query operation is implemented in track.py. No matter what combination of clients I try (and I have tried a lot), I am not able to hit the target throughput of 150 for custom-query. The max throughput I get in the summary is ~100. I thought I was not generating enough load, so I even tried two load driver machines, and I am still not able to hit the target. Are there any other parameters I can use to tune the target throughput?
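
For reference, the runner in track.py is registered along these lines (a simplified sketch; the real query body is omitted and the index name here is just a placeholder, and recent Rally versions expect async runners registered with async_runner=True):

    # track.py -- simplified sketch of the custom runner (placeholder index/query)

    def custom_query(es, params):
        # Issue one search per invocation; Rally meters invocations against
        # the task's target-throughput.
        es.search(index="search_index", body={"query": {"match_all": {}}})
        return 1, "ops"


    def register(registry):
        registry.register_runner("custom-query", custom_query)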

What is the latency of a query? Do you need more clients? What does the Elasticsearch cluster look like during the benchmark with respect to CPU usage, GC and disk I/O?
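
Those numbers matter because, with a fixed number of clients each issuing queries one at a time, per-request latency puts a hard ceiling on achievable throughput. A rough back-of-the-envelope, using the 45 clients from your track and a hypothetical average latency:

    # Back-of-the-envelope throughput ceiling for serially issued queries.
    clients = 45                  # custom-query clients from the track above
    target_throughput = 150       # ops/s requested for custom-query
    avg_latency_s = 0.5           # hypothetical average per-query latency

    max_achievable = clients / avg_latency_s           # ops/s the clients can sustain
    required_latency_s = clients / target_throughput   # latency needed to hit the target

    print(f"ceiling at {avg_latency_s}s latency: {max_achievable:.0f} ops/s")
    print(f"to reach {target_throughput} ops/s, latency must stay under "
          f"{required_latency_s * 1000:.0f} ms per query")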

The query latency that comes back is really high (on the order of 15+ minutes). I tried adding more clients until it crashed; on a single machine, around 13 clients is the maximum I could use. I have not monitored my cluster during the run, but that's something I'll try.

I would recommend configuring a metrics store. This will give you a record for each request and allow you to analyse the results in Kibana.
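
That means pointing Rally at an Elasticsearch metrics store in ~/.rally/rally.ini; roughly like this, though the exact key names may differ between Rally versions and the host below is just an example:

    [reporting]
    datastore.type = elasticsearch
    datastore.host = metrics-cluster.example.org
    datastore.port = 9200
    datastore.secure = false
    datastore.user =
    datastore.password =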

I would also recommend installing X-Pack Monitoring so you can get better insight into what is limiting performance. Maybe your cluster is simply not powerful enough to handle the load and query mix you want to throw at it.

What is the specification of your Elasticsearch cluster? What type of queries are you running?

I would also recommend starting with a lower concurrency level and target throughput and gradually increasing both to see how latencies change with load.
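
For example, a schedule that steps the same query task up in stages might look roughly like this (client counts and time periods are illustrative; depending on the Rally version you may need to give each task a distinct name):

    {
        "operation": "custom-query",
        "clients": 15,
        "warmup-time-period": 30,
        "time-period": 300,
        "target-throughput": 50
    },
    {
        "operation": "custom-query",
        "clients": 30,
        "warmup-time-period": 30,
        "time-period": 300,
        "target-throughput": 100
    },
    {
        "operation": "custom-query",
        "clients": 45,
        "warmup-time-period": 30,
        "time-period": 300,
        "target-throughput": 150
    }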
