Any limitations with distributed load-drivers?

Hi Johan,

One option to stress Rally is to "simulate" Elasticsearch's _bulk endpoint with a static response, e.g. by using nginx.

You need to install the headers-more module (it provides the more_set_headers directive) and then you can do something like this in your nginx config:

server {
        listen 19200 default_server;
        listen [::]:19200 default_server;

        default_type  application/json;

        root /var/www/html;

        index index.json;

        server_name _;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }

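        # Stub out the bulk endpoint: answer POSTs with a canned success
        # response instead of doing any real indexing work.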
        location /_bulk {
                if ($request_method = POST) {
                        more_set_headers "Content-Type: application/json; charset=UTF-8";
                        return 200 '{"took":514,"errors":false,"items":[{"index":{"_index":"myindex","_type":"mytype","_id":"1","_version":1,"_shards":{"total":1,"successful":1,"failed":0},"created":true,"status":201}}]}';
                }
        }
}

However, Rally performs a few more operations (cluster health check, index creation, etc.), so your simplest option is probably to use nginx as a reverse proxy in front of one of your Elasticsearch nodes for all operations except _bulk. I don't have a tested config snippet for that, but a rough sketch is below.
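Something along these lines might serve as a starting point; the upstream address (localhost:9200) and the shortened bulk response are assumptions you would need to adapt to your setup:

server {
        listen 19200 default_server;

        # Forward everything except _bulk to a real Elasticsearch node
        # (point this at one of your own nodes).
        location / {
                proxy_pass http://localhost:9200;
        }

        # Answer bulk requests locally with a canned success response so the
        # cluster never sees the indexing load. Reuse the longer canned
        # response from the config above if you need per-item details.
        location /_bulk {
                if ($request_method = POST) {
                        more_set_headers "Content-Type: application/json; charset=UTF-8";
                        return 200 '{"took":1,"errors":false,"items":[]}';
                }
                proxy_pass http://localhost:9200;
        }
}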

The second thing that you could try is to run Rally with --enable-driver-profiling (see the docs), but the better option for you is probably to stub out the _bulk endpoint.
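For reference, that flag is just added to your usual invocation; the track, target hosts and pipeline below are only placeholders for your own setup:

esrally --track=geonames --target-hosts=localhost:19200 --pipeline=benchmark-only --enable-driver-profiling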

Daniel
