ES is slow when I try to return a huge result set

Hi,

For our use case, we need to gather all the results that match an ES query
(their ids and scores only), do some post-processing on the results, and
filter out unnecessary data.

As the post-processing involves merging results from other services
(genome-based, so we can't denormalize the data and store it in
Elasticsearch), we need to retrieve all results that match the ES query.

However, on our setup, a query that returns 20,000 results, filtered to
return only '_id' and '_score', takes almost 800ms. We are presently using a
1-shard, 3-replica setup on 3 servers.
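For reference, a request that asks ES for hits without their source, so each hit carries only metadata such as '_id' and '_score', could look roughly like this (a sketch for the ES versions of that era; the query text is a hypothetical stand-in):

```python
import json

# Sketch of a search body that suppresses _source so each hit returns
# only metadata (_id, _score, ...). Query text is a made-up example.
body = {
    "query": {"query_string": {"query": "verb1"}},
    "fields": [],        # empty fields list: no _source, metadata only
    "size": 20000,       # request all matching hits in one response
}

payload = json.dumps(body)  # what would be POSTed to /index/_search
```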

Is there some way I can optimize this?

James

--

Hello James,

Does it get any faster if you use another protocol, like Thrift instead of
HTTP?

Another idea is to try to parallelize the querying with the post-processing
by using scroll: for example, take 1000 results at a time and send them to
post-processing, then get the next 1000, and so on. I assume this will save
you some time - and you can also show a little progress bar if you need to.
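The batching pattern described above could be sketched like this. The `search` and `scroll` callables are hypothetical stubs standing in for real client calls (pyes or raw HTTP):

```python
# Sketch: pull hits in batches via scroll and post-process each batch as
# it arrives, instead of waiting for the full result set.

def fetch_all(search, scroll, batch_size=1000):
    """Yield lists of (id, score) pairs, one batch at a time."""
    resp = search(size=batch_size)          # initial request, starts the scroll
    while True:
        hits = resp["hits"]["hits"]
        if not hits:                        # empty page means we're done
            break
        yield [(h["_id"], h["_score"]) for h in hits]
        resp = scroll(resp["_scroll_id"])   # fetch the next batch

# Fake client responses for illustration: two pages of hits, then empty.
pages = [
    {"_scroll_id": "s1", "hits": {"hits": [{"_id": "a", "_score": 1.0},
                                           {"_id": "b", "_score": 0.9}]}},
    {"_scroll_id": "s2", "hits": {"hits": [{"_id": "c", "_score": 0.8}]}},
    {"_scroll_id": "s3", "hits": {"hits": []}},
]
it = iter(pages[1:])
results = []
for batch in fetch_all(lambda size: pages[0], lambda sid: next(it)):
    results.extend(batch)   # post-processing would happen here, per batch
```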

Best regards,
Radu

http://sematext.com/ -- ElasticSearch -- Solr -- Lucene


--

Wouldn't the network overhead of multiple calls be worse in the end? The
usual issue with large results is memory, which is why scroll is useful.

Protocol will probably give you the biggest savings. Are you using the Java
transport protocol or HTTP? Do you have the _source field enabled? How long
does a "normal" query (default size of 10) take?
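One way to tell transport/serialization overhead apart from search time is to compare the wall-clock duration of the call with the "took" field ES reports. A rough sketch, with `do_search` as a hypothetical stub for the real client call:

```python
import time

# Compare wall-clock time with ES-reported "took" to estimate how much
# time goes to network transfer and (de)serialization rather than search.

def timed_search(do_search):
    start = time.perf_counter()
    resp = do_search()
    wall_ms = (time.perf_counter() - start) * 1000.0
    overhead_ms = wall_ms - resp["took"]    # transport + client-side parsing
    return resp["took"], wall_ms, overhead_ms

# Stub response: pretend ES says the query itself took 5 ms.
took, wall, overhead = timed_search(lambda: {"took": 5})
```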

--
Ivan


--

What time is 800ms: the response time, or the time ES reports in the
"took" field?

Note, only 1 shard is a very uncommon setup; it makes query responses slow.
If you use more shards (the default is 5 for very good reason) you will get
faster responses, because query response generation can be executed on many
distributed shards in parallel. Selecting a replica level of 3 is also
very uncommon. Note that adding replica shards only helps with handling
query load in heavy-concurrency situations (when the ES cluster is hammered
by thousands of queries per second); it does not help your case of
creating and transporting large response sets.

You did not give an example of your index mapping and queries, so I
can't tell whether your queries need optimization. I consider this the most
important step before tuning other knobs.

After index and query optimization, you could watch for other knobs to
optimize, here are some general hints.

If you measured the overall response time, there may be overhead in the
JSON serialization/deserialization. Which client do you use? Python?
(The Python standard-library JSON module is known to be very slow.)

If you measured the "took" field, how large is the result set in kB/MB?
What is the number of TCP packets? Use tcpdump to monitor the network. Note
there is latency on the network, which is outside the scope of ES. You
can use faster NICs (10gbit), larger network buffers in your protocol
(HTTP), or compression of the data body to reduce transfer time (ES
supports HTTP compression, but it must be enabled).
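To illustrate why compression matters here: a large result set of ids and scores is highly repetitive JSON, so gzip shrinks it dramatically before it hits the wire. A quick sketch with synthetic data:

```python
import gzip
import json

# Build a synthetic result set resembling 20k hits of ids and scores,
# then gzip it to see roughly how much smaller it travels over HTTP.
hits = [{"_id": "doc-%d" % i, "_score": 1.0} for i in range(20000)]
raw = json.dumps({"hits": {"hits": hits}}).encode("utf-8")
packed = gzip.compress(raw)

ratio = len(raw) / len(packed)   # repetitive JSON compresses very well
```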

If you can't bear the HTTP overhead (headers are always uncompressed, and
mixing them with compressed data creates some extra work on the wire
protocol), you can go for Thrift (or WebSocket) to gain roughly another 15%.

Jörg

--

Jorg, Ivan, Radu,

Thanks for replying to my question.

800ms is the time ES reports in the "took" field.

I'm still testing out different shard/replica setups, and thought I'd test
using 1 shard / n replicas (since I have three machines, n=3) to measure
the throughput (returning large result sets, 90k-120k hits per query)
for my use case.

I am only using a single field, "tags", in each document, where tags are
stored as "verb1, verb2, verb3, verb4" and search is performed using
pyes.StringQuery("verb"). Is there anything further I can tweak about this?
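For what it's worth, my understanding is that pyes's StringQuery translates to a raw query_string query; it may be worth verifying against the request pyes actually sends. Roughly:

```python
import json

# Presumed raw equivalent of pyes.StringQuery("verb") - a query_string
# query (worth confirming against the actual wire request).
body = {"query": {"query_string": {"query": "verb"}}}

# Since "tags" is the only field, a term query targeting it directly
# could be cheaper still, if the tags are indexed as single tokens
# (hypothetical alternative, depends on the mapping):
term_body = {"query": {"term": {"tags": "verb"}}}

payload = json.dumps(body)
```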

I've enabled HTTP compression, and response times measured between the
application and Elasticsearch seem to have improved a little. The main
bottleneck still appears to be the 800ms ES reports in the "took" field.

Using Chrome dev tools, the _search response is 340.78 KB gzipped and
6.62 MB uncompressed.

I wonder if it's worth writing a plugin specifically for this use case, to
see how fast it can go.


--

Jorg,

I've just tested a 3-shard, 1-replica setup on my index. ES takes almost
the same amount of time, with only a slight improvement of around 2-5%;
"took" times are still > 600ms.

Any ideas on how I can further tweak this?

--

James,

OK, let me try to understand: 6.62 MB is the uncompressed result set size
of a single query?

If so, 800ms on the ES side seems high at first for a response time
(typical on average server hardware is 5-8 ms), but with such a large
result it's no surprise, because ES needs time to generate the result.

Is 800ms measured only on a cold index? Or also on a warm index?

You can try to compress the index (compressing the transport is fine, but
it would help more if 800ms were the total response time).

Do you have stored fields? Is the index compressed on disk?

Also, I don't know your queries. Are they random? Do they repeat? If you
have similar queries, you could try to cache your result set; this can be
achieved by moving the query into a filtered query.
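The filtered-query form suggested above, in the syntax the ES of this era used, might look roughly like this (the term value is a hypothetical example; the point is that filter results can be cached and reused across repeated queries):

```python
import json

# Sketch: wrap the match condition in a filter inside a "filtered" query,
# so ES can cache the filter's bitset for repeated, similar queries.
filtered = {
    "query": {
        "filtered": {
            "query": {"match_all": {}},
            "filter": {"term": {"tags": "verb1"}},   # cacheable filter
        }
    }
}

payload = json.dumps(filtered)  # body for POST /index/_search
```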

Mapping the ES process into RAM could also improve the situation, because
your search seems I/O-bound. How much RAM and heap are dedicated to ES,
and how big is the index on disk?

It would be nice to see a small example of a document and a query, so the
challenge can be understood more easily.

Jörg

--

One thing I forgot: do you use a 100Mb/s or 1Gb/s network connection?

Jörg

--