Replica shards having a negative impact on percolation

I'm not sure if this is a bug or a misunderstood feature of percolation, but over the last few months I've been trying to optimize a percolation solution based on a percolator index with roughly 18,200 queries.

I know you're supposed to use many small shards to improve percolation throughput, but I noticed that using replica shards had no positive effect (unlike for regular searches); if anything, the effect seemed to be negative.

My benchmark setup is an 8-node cluster, where each node has 16 Intel CPU cores, 64 GB of RAM, and Elasticsearch 6.2.2 installed. I previously ran the same tests with Elasticsearch 5.6.4 and got the same results.

For benchmarking I run a simple script (of our own making) which loads JSON documents from the file system, runs the percolation in batches of 25 via a curl command, checks the result, and logs the time from when the percolation request was sent to Elasticsearch until the result came back. I can run this test in parallel, but for the current problem I only ran it in serial mode, with one 25-document batch at a time. A test run goes through thousands of JSON files, and at the end the script computes the average Documents Per Second (DPS) handled by the percolator index.
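For reference, each batch is sent as a single `percolate` query using the `documents` parameter (available since Elasticsearch 6.1), roughly like the sketch below. The index name `queries`, the percolator field name `query`, and the `message` field are my assumptions here, not necessarily what our script uses, and the batch is truncated to two documents for brevity:

```shell
# One percolation batch: match all stored percolator queries against
# several documents in a single _search request.
curl -X GET "localhost:9200/queries/_search" -H 'Content-Type: application/json' -d'
{
  "query": {
    "percolate": {
      "field": "query",
      "documents": [
        { "message": "first document in the batch" },
        { "message": "second document in the batch" }
      ]
    }
  }
}'
```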

In the most recent tests I built the percolator index with just 4 primary shards and 0 replica shards. This meant that only half the nodes had data (shards) to percolate against, and when I ran the benchmark script I could see this in Kibana: only the four nodes with a primary shard did any work (the other four nodes had CPU usage near 0). At the end I got a result of 1.067 DPS (a little more than 1 document per second).
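The index setup for this first run looks roughly like the following (again, the index name, mapping type, and field names are illustrative assumptions; the essential parts are the shard settings and the `percolator` field type in ES 6.x syntax):

```shell
# Percolator index with 4 primaries and no replicas, so only 4 of the
# 8 data nodes hold a shard.
curl -X PUT "localhost:9200/queries" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "number_of_shards": 4,
    "number_of_replicas": 0
  },
  "mappings": {
    "_doc": {
      "properties": {
        "query":   { "type": "percolator" },
        "message": { "type": "text" }
      }
    }
  }
}'
```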

Now, for the next run I added 1 replica so that the percolator index had 4 primary and 4 replica shards, i.e. one shard for each node in the cluster. Naively, I would have expected the replica shards to pull the same workload as the primary shards, thus doubling the DPS. And when I started the exact same benchmark test I could see in Kibana that every node had roughly the same CPU usage, so the replica shards were involved in the percolation just like the primary shards. However, I still ended up with a DPS of 1.033, about 3% lower!
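Adding the replica was a simple settings update on the live index (assuming the same hypothetical index name as above):

```shell
# Bump the replica count from 0 to 1 so every one of the 8 nodes
# ends up holding exactly one shard copy.
curl -X PUT "localhost:9200/queries/_settings" -H 'Content-Type: application/json' -d'
{
  "index": { "number_of_replicas": 1 }
}'
```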

So, despite having one shard per node, the second test got a lower throughput than the first run, which only involved 4 of the 8 data nodes! To me this is really counterintuitive.

One final piece of the puzzle: when I re-created the percolator index with the same 18,200 queries but with 8 primary shards and 0 replicas, which superficially resembles the previous test (i.e. one shard per node), I did get a doubling of DPS (to 2.003). So primary shards do seem to carry their share of the workload equally, unlike replica shards.

So, to my question: is this a bug, or are you simply not supposed to use replica shards in a percolator index?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.