Query Performance

Hi All,

I am trying to improve my ES query performance. The goal is to get response
times for 3 related queries under a second. In my tests I have seen a 90th
percentile response time ('took' time) for the combined 3 queries of ~1.8
seconds. Here are the details:

Cluster:

  • 5 machines, 5 shards, currently on m3.2xlarge. (Started with less
    powerful boxes, m3.large, and went up one by one.)
  • 4 indexes:
    • one index with ~90 million records (19.3 GB total across all shards)
    • one with ~24 million records (6 GB total across all shards)
    • the other two with ~780K and ~340K records (160 MB and 190 MB total)
  • All fields in the larger indexes are integers.
  • Record size is small-ish.
  • Indexes are compressed.
  • I have given 15 GB to each ES instance.
  • Indexes are stored on EBS volumes; each instance has a 250 GB volume
    attached. (Keeping SSDs as a last resort.)

The indexes are not changing (for now; in future they would change once a
day), so no indexing takes place while we query. This let me try things
like reducing the number of segments in the two larger indexes, which
helped to a point.

Querying Technique:

  • I use the Python ES client.
  • 3 small instances, each forking 10 threads at the same time.
  • Each thread fires the 3 queries before reporting a time.
  • At times there are ~100 concurrent queries against the cluster; it
    settles around ~50-60.
  • I take the 'took' time from the ES response to measure times.
  • I discard the first 100 timings as warm-up before measuring.
  • A total of 5000 unique users are used, with the 3 ES queries fired for
    each; times for 4900 users are measured.

Observations:

  • RAM is never under stress; usage stays well below the 15 GB allotted.
  • CPU comes under strain, going up to the 85-95% region on all instances
    during the tests.

Queries:

1. On an index with ~24 Million records:

res = es.search(
    index="index1",
    body={"query": {"bool": {"must": [{"term": {"cid": value}}]}}},
    sort=["source:desc", "cdate:desc"],
    size=100,
    fields=["wiid"],
    _source="true")

I parse the results of this query to extract certain fields and pass them
on to the 2nd query. Let's call those fields q1.field1 and q1.field2.

2. On an index with ~90 million records:

res1 = es.search(
    index="index2",
    body={
        "query": {"filtered": {"filter": {"bool": {
            "must": {"terms": {"col_a": q1.field1}},
            "must_not": {"terms": {"col_b": q1.field1}}}}}},
        "aggs": {"i2B": {
            "terms": {"field": "col_b", "size": 1000, "shard_size": 10000,
                      "order": {"mss.sum": "desc"}},
            "aggs": {
                "mss": {"stats": {
                    "script": "ca = _source.col_a; "
                              "index = wiids.indexOf(ca); sval = 0; "
                              "if (index != -1) sval = svalues.get(index); "
                              "else sval = -1; "
                              "return _source.col_x * sval;",
                    "params": {"wiids": q1.field1,
                               "svalues": q1.field2}}},
                "simSum": {"stats": {"script": "return _source.col_x"}}}}}},
    size=1)

  • It uses a filtered query.
  • It uses 2 aggregations.
  • It uses a script in an aggregation.
  • It uses shard_size.

Again, I parse the results and extract a field. Let's call that field
q2.field1.

3. On an index with ~340K records:

res2 = es.search(
    index="index3",
    body={"query": {"filtered": {
        "query": {"terms": {"wiid": q2.field1}},
        "filter": {"bool": {"must": [
            {"range": {"isInRange": {"gte": 10}}},
            {"term": {"isCondA": "false"}},
            {"term": {"isCondB": "false"}},
            {"term": {"isCondC": "false"}}]}}}}},
    size=1000)

Please let me know if any other information would help you help me.

Query 2 above is doing aggregations and using a custom script. This is
where times reach a few seconds: 2-3 seconds, or even 4+ seconds at
times.

I could move to a higher-end CPU machine and maybe performance would
improve, but I wanted to check whether there is anything else I am missing.

Thanks!
Ravi

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/9148ddfb-1a72-49db-b716-f2f9405392e4%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Hello All,

Any help on this, please?

Thanks!
Ravi

On Monday, 16 June 2014 12:40:38 UTC+1, ravim...@gmail.com wrote:


Does the response time improve when the caches are full?

Can you try the query without the sort and see if things get better?

I have found that sorting in ES is sometimes not a good idea.

Georgi

On Monday, June 16, 2014 1:40:38 PM UTC+2, ravim...@gmail.com wrote:


For the first query, since you don't care about the _score, move the bool
query into a filter. If you only need field1 and field2 and your _source is
big, you might be able to save some network payload by using source
filtering for just those 2 fields.
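Applied to the first query from this thread, that might look like the
sketch below (1.x query DSL; the helper name and the choice of source
fields are mine, not from the thread):

```python
# Sketch of the suggestion for query 1 (Elasticsearch 1.x DSL): run the
# term lookup as a non-scoring (and cacheable) filter, and trim _source
# to just what the follow-up query needs. Field names are assumptions.
def build_query1_body(cid_value):
    return {
        "query": {
            "filtered": {
                # No "query" clause: match_all is implied, so no scoring.
                "filter": {"bool": {"must": [{"term": {"cid": cid_value}}]}}
            }
        },
        # Source filtering: only ship the fields the 2nd query consumes.
        "_source": ["wiid", "cdate"],
    }

body = build_query1_body(42)
# res = es.search(index="index1", body=body,
#                 sort=["source:desc", "cdate:desc"], size=100)
```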

For the second query, if you have a lot RAM and say col_a and col_b are not
big values (long strings) and not high cardinality, you can try to switch
all _source.col_a (or _source.blah) to doc['col_a'].value in your scripts.
This syntax will load the field values into memory and should perform
faster than _source.blah. And your last stats agg (simSum), not sure why
that needs to be a script - can it just be a stats-field on col_x? Also if
the second query does not need to return hits (i.e. you only need info from
the aggs), you can set search_type=count to further optimize it.
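As a sketch, the second query with those three changes applied (1.x DSL,
script in the MVEL-ish style used in the thread; field and parameter names
follow the thread's placeholders, and the exact script syntax is an
assumption, not tested against a cluster):

```python
# Sketch of the three suggestions for query 2 (Elasticsearch 1.x DSL):
# 1) doc['field'].value (field data, in memory) instead of parsing _source,
# 2) a plain stats agg on col_x instead of a trivial script,
# 3) search_type="count", since only the aggregations are consumed.
def build_query2_body(wiids, svalues):
    score_script = (
        "ca = doc['col_a'].value; "          # was: _source.col_a
        "index = wiids.indexOf(ca); "
        "sval = (index != -1) ? svalues.get(index) : -1; "
        "return doc['col_x'].value * sval;"  # was: _source.col_x * sval
    )
    return {
        "query": {"filtered": {"filter": {"bool": {
            "must": {"terms": {"col_a": wiids}},
            "must_not": {"terms": {"col_b": wiids}},
        }}}},
        "aggs": {"i2B": {
            "terms": {"field": "col_b", "size": 1000,
                      "shard_size": 10000, "order": {"mss.sum": "desc"}},
            "aggs": {
                "mss": {"stats": {"script": score_script,
                                  "params": {"wiids": wiids,
                                             "svalues": svalues}}},
                # Plain field stats: no script needed for a simple sum.
                "simSum": {"stats": {"field": "col_x"}},
            },
        }},
    }

body = build_query2_body([1, 2, 3], [0.5, 0.2, 0.9])
# res1 = es.search(index="index2", body=body, search_type="count")
```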

For the third query, if you don't care about the _score, move the query
part into the filter part.
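For the third query that could look like the following sketch, with the
terms lookup moved into the non-scoring filter alongside the existing
conditions (1.x DSL; helper name is mine):

```python
# Sketch: query 3 with everything on the filter side, so no scoring work
# is done at all. Field names are the placeholders from the thread.
def build_query3_body(wiids):
    return {
        "query": {"filtered": {
            "filter": {"bool": {"must": [
                {"terms": {"wiid": wiids}},   # moved out of the "query" part
                {"range": {"isInRange": {"gte": 10}}},
                {"term": {"isCondA": "false"}},
                {"term": {"isCondB": "false"}},
                {"term": {"isCondC": "false"}},
            ]}},
        }},
    }

body = build_query3_body([10, 11, 12])
# res2 = es.search(index="index3", body=body, size=1000)
```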


Hi Georgi,

Thanks for your response. I clear the caches before each test, i.e. before
each run over the 5000 unique ids. During the test period, the cache size
reaches 2.5-3 GB for the filter cache and 120+ MB for the field cache.

The response time (90th percentile) for query 1 is about 85-100
milliseconds; the max I see is about 375 milliseconds. Ideally I would
avoid sorting, but in this case I need it, and given the times are mostly
under 100 ms, I guess the sort is fine.

Please let me know if you think otherwise.

Thanks!
Ravi

On Tuesday, 17 June 2014 15:06:49 UTC+1, Georgi Ivanov wrote:


Hi Binh,

thanks for helping.

My record size for the 1st query is 4 fields: 3 integers and a date, so the
_source is not big enough to raise concerns. I will try your suggestion
anyway and report any improvements here.

For the 2nd query: I have 15 GB of RAM, only ~20% of which gets utilised
during the tests. Thanks for all three suggestions; I will definitely try
them and come back here. Good catch on the script in simSum: I need just
the sum of that field, which does not need a script. I will change that and
see what happens.

For the 3rd query, I do not care about the _score of the returned values,
so I will give that a try as well.

Thanks a lot.

Ravi

On Tuesday, 17 June 2014 15:28:21 UTC+1, Binh Ly wrote:


Hi Binh,

Did some tests and here are the findings:

Moving to c3.4xlarge reduces the time by 300 ms, taking the overall 90th
percentile down to ~1.5 seconds. CPU is still in the high 80s-90s.

Making all queries filtered and removing the script from the 2nd query's
2nd aggregation reduced the CPU footprint (high 50s-60s) and improved
overall timings by close to 200 ms. I am at ~1.3 seconds for all 3 queries.

I guess the only next steps now are to play with shard size, or add more
machines?

Thanks!
Ravi

On Tuesday, 17 June 2014 15:52:31 UTC+1, ravim...@gmail.com wrote:


By the way, changing search_type to count did not have much impact on the
timings.

On Tuesday, 17 June 2014 18:19:40 UTC+1, ravim...@gmail.com wrote:


Hello All,

Continuing the experimentation, I changed indices.cache.filter.size from
the default 20% to 30% on all of my boxes.

I can now see increased memory and cache usage: the filter cache jumped
from 2.9 GB to 4.4 GB, which checks out given the 15 GB allotted to ES
(30% of 15 GB is 4.5 GB).
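For reference, this is the change as it would presumably appear in
elasticsearch.yml on each node:

```yaml
# elasticsearch.yml (per node): raise the filter cache cap from the
# default to 30% of the heap, as described above.
indices.cache.filter.size: 30%
```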

Even though RAM usage has increased on all machines, I do not see any
performance improvement. How is that possible? Any clues as to what I might
be doing wrong here?

Thanks!
Ravi

On Wednesday, 18 June 2014 11:18:57 UTC+1, ravim...@gmail.com wrote:


Hi All,

One thing I forgot to mention is that my 3rd query, which takes input from
the 2nd query, gets close to 500-1000 values from it, so the terms query
receives 500-1000 values. The 90th percentile for the third query comes out
to ~350 ms.

Thanks!
Ravi

On Wednesday, 18 June 2014 17:49:45 UTC+1, ravim...@gmail.com wrote:

Hello All,

As per continued experimentation, i changed

*indices.cache.filter.size *from default 20% to 30% on all of my boxes.

I can now see increased memory usage and i see increased cache usage. I
see my cache jumped from 2.9GB to 4.4 GB which is accurate as allocated RAM
is 15GB.

Even though RAM usage has increased on all machines, i do not see any
performance improvement. How is that possible? Any clues as to what i might
be doing wrong here?

Thanks!
Ravi

On Wednesday, 18 June 2014 11:18:57 UTC+1, ravim...@gmail.com wrote:

btw, changing search_type to count did not have much impact on the
timings.

On Tuesday, 17 June 2014 18:19:40 UTC+1, ravim...@gmail.com wrote:

Hi Binh,

Did some tests and here are the findings:

Moving to c3.4xlarge reduces time by 300 ms. So that takes overall 90th
percentile down to ~1.5 seconds. CPU still in high 80s-90s.

Making all queries filtered and removing script from 2nd queries' 2nd
aggregation reduced CPU footprint (high 50s-60s) and improved overall
timings by close to 200 ms. I am at ~1.3 seconds for all 3 queries.

I guess only next steps now is to play with shard size? or more
machines?

Thanks!
Ravi

On Tuesday, 17 June 2014 15:52:31 UTC+1, ravim...@gmail.com wrote:

Hi Binh,

thanks for helping.

My records for the 1st query have 4 fields, 3 of them integers and one a
date, so the _source is not big enough to raise concerns. I will try your
suggestion anyway and report any improvements here.

For the 2nd query: I have 15 GB of RAM, only 20% of which gets utilised
during the tests. Thanks for all three suggestions, I will definitely try
them and come back here. Good catch on the script in simSum, thanks - I need
just the sum of that field, which does not need a script. Will change that
and see what happens.

For the 3rd query, I do not care about the _score of returned values.
Will give that a try as well.

Thanks a lot.

Ravi

On Tuesday, 17 June 2014 15:28:21 UTC+1, Binh Ly wrote:

For the first query, since you don't care about the _score, move the
bool query into a filter. If you only need field1 and field2 and your
_source is big, you might be able to save some network payload by using
source filtering for just those 2 fields.
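As a sketch of this suggestion (Elasticsearch 1.x query DSL built as a Python dict; the field names `cid`, `wiid`, `cdate`, and the value 12345 are placeholders taken from the first query described earlier in the thread):

```python
# First query reworked per the advice above: a "filtered" query skips
# scoring entirely, and "_source" filtering limits the payload to the
# fields actually needed. Field names and the value are placeholders.
first_body = {
    "query": {
        "filtered": {
            "filter": {
                "bool": {
                    "must": [{"term": {"cid": 12345}}]
                }
            }
        }
    },
    "_source": ["wiid", "cdate"],  # return only the fields you need
    "sort": [
        {"source": {"order": "desc"}},
        {"cdate": {"order": "desc"}},
    ],
    "size": 100,
}
# Then: es.search(index="index1", body=first_body)
```

Because the term match moved into the filter, it is cacheable and no relevance score is computed per hit.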

For the second query, if you have a lot of RAM and, say, col_a and col_b
are not big values (long strings) and not high cardinality, you can try
switching all _source.col_a (or _source.blah) to doc['col_a'].value in your
scripts. This syntax loads the field values into memory and should
perform faster than _source.blah. And for your last stats agg (simSum), I'm
not sure why that needs to be a script - can it just be a stats field on col_x?
Also, if the second query does not need to return hits (i.e. you only need
info from the aggs), you can set search_type=count to further optimize it.
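A hedged sketch of what the reworked second query might look like (ES 1.x DSL as a Python dict; the original query is not shown in the thread, so `cid`, `col_a`, `col_x`, the value, and the bucket name are all placeholders):

```python
# Second query per the three suggestions above: a filtered query, a plain
# "stats" agg on the field itself instead of a script for simSum, and
# search_type=count so no hits are serialized (aggs only).
second_body = {
    "query": {
        "filtered": {
            "filter": {"term": {"cid": 12345}}  # placeholder filter
        }
    },
    "aggs": {
        "by_col_a": {
            "terms": {"field": "col_a"},
            "aggs": {
                # stats directly on the field; no script needed for a sum
                "simSum": {"stats": {"field": "col_x"}}
            }
        }
    },
}
# Then: es.search(index="index2", body=second_body, search_type="count")
```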

For the third query, if you don't care about _score, move the query
part into the filter part.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/717e6183-37e7-4542-8371-a8f35382db32%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Hello,

Can you isolate your slow queries and check whether they are slow even when
run independently? Check how many documents are matched by these
queries; if they match millions of documents, that would explain it.

Also, you are using a terms filter with hundreds of entries. If these
entries are different for each query, you may want to set the filter
"execution" to "bool" (or "fielddata"?) to cache the terms individually
rather than just the combination of them: { "terms" : { "execution":
"bool", ... } }
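A sketch of that terms filter in context (ES 1.x; the `wiid` field and the short id list are placeholders standing in for the 500-1000 ids described above):

```python
# Terms filter with "execution": "bool": each term is cached as its own
# filter bitset, so queries with overlapping (but not identical) id lists
# can reuse cache entries, instead of caching only the exact combination.
third_body = {
    "query": {
        "filtered": {
            "filter": {
                "terms": {
                    "wiid": [101, 102, 103],  # placeholder ids
                    "execution": "bool",
                }
            }
        }
    }
}
# Then: es.search(index="index1", body=third_body)
```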

Cédric Hourcade
ced@wal.fr

On Thursday, 19 June 2014 15:12:43 UTC+2, ravim...@gmail.com wrote:


--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/99179994-33be-4753-8dd9-8c3db1fc5be5%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.