I'm running some tests on Elasticsearch to see whether it performs well enough in our particular situation.
My queries average ~130ms according to the 'took' field returned by Elasticsearch, but when I time the full query round trip I'm getting ~820ms. What might be causing this discrepancy? Am I misunderstanding the 'took' field?
My setup:
Elasticsearch 0.19.0
Amazon EC2 m2.xlarge box running CentOS
talking to Elasticsearch via the Python pyes package, e.g.:
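A minimal sketch of what such a pyes query might look like (host, index name, and query body are hypothetical placeholders, and the exact pyes call signature varies by release):

# Hypothetical sketch only -- host, index name, and query body are placeholders.
from pyes import ES

conn = ES("127.0.0.1:9200")                         # HTTP endpoint on the EC2 box
query = {"query": {"term": {"user": "daniel"}}}     # placeholder query body
results = conn.search(query, indices=["my-index"])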
The time reported by Elasticsearch in the "took" field is the time it took Elasticsearch to process the query on its side. It doesn't include:
- serializing the request into JSON on the client
- sending the request over the network
- deserializing the request from JSON on the server
- serializing the response into JSON on the server
- sending the response over the network
- deserializing the response from JSON on the client
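One way to see how much of the gap those steps account for is to time the round trip on the client and compare it against 'took'. A minimal sketch against the REST search endpoint (host, index, and query body are placeholder assumptions):

# Sketch only: compares client-measured wall time with the server-reported
# 'took' value. Host, index, and query body are placeholder assumptions.
import json, time, urllib2

url = "http://127.0.0.1:9200/my-index/_search"
body = json.dumps({"query": {"match_all": {}}})

start = time.time()
response = json.loads(urllib2.urlopen(url, body).read())
elapsed_ms = (time.time() - start) * 1000.0

print "took (server-side):  %d ms" % response["took"]
print "round trip (client): %.0f ms" % elapsed_ms
print "overhead outside ES: %.0f ms" % (elapsed_ms - response["took"])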
Got it, thanks. I'm going to try Thrift and faster JSON serializers.
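On the "faster JSON serializers" side, a quick way to gauge the client-side encode cost is to time the standard library's json against a C-backed serializer such as ujson. A rough sketch with a synthetic payload (ujson is just one example of such a library):

# Sketch: rough client-side serialization cost, stdlib json vs. ujson.
# The payload here is a synthetic placeholder, not a real response.
import json, time
import ujson  # pip install ujson

payload = {"hits": [{"_id": str(i), "_source": {"n": i}} for i in range(10000)]}

for name, dumps in [("json", json.dumps), ("ujson", ujson.dumps)]:
    start = time.time()
    for _ in range(10):
        dumps(payload)
    print "%s: %.1f ms per dump" % (name, (time.time() - start) * 100)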
Switching to Thrift shaved off half a second per query. Awesome.
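For anyone reproducing this: the change amounts to installing Elasticsearch's Thrift transport plugin on the server (it listened on port 9500 by default) and pointing pyes at that port instead of the HTTP port. The connection line below is an assumption; the exact pyes syntax for Thrift connections varied by release:

# Assumption-heavy sketch: requires the elasticsearch-transport-thrift plugin
# on the server and the thrift Python package on the client. 9500 is the
# plugin's default port; check the pyes docs for the connection form your
# pyes release supports.
from pyes import ES

conn = ES("127.0.0.1:9500")  # Thrift endpoint instead of HTTP on 9200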
I know this is an old thread, but it seemed like the right place for this.
Does 'took' include the time that the request sat in the queue before a
thread from the pool was available? Is this the best "response time"
measurement to use for assessing your maximum shard size?
It measures the "wall time" of query execution at the transport level of Elasticsearch, which includes any time spent waiting in the queue. It is basically everything after the request is deserialized from JSON and before the response is serialized into JSON again. The idea behind assessing the maximum shard size is to load a single shard until performance becomes unacceptably slow. If you measure performance in terms of the time reported in the 'took' field, then you can use it; if you measure performance in terms of end-to-end response time, then you need to measure end-to-end response time.
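That "load a shard until it's too slow" loop can be scripted. A rough sketch (index name, document shape, batch size, and the 500 ms threshold are all assumptions) that grows a single-shard index and records 'took' alongside end-to-end time after each batch:

# Sketch of the shard-sizing loop described above: keep adding documents to a
# single-shard index and stop when query latency crosses a chosen threshold.
# Index name, doc shape, batch size, and threshold are placeholder assumptions.
import json, time, urllib2

HOST = "http://127.0.0.1:9200"
INDEX = "shard-size-test"          # assumed created with 1 shard, 0 replicas
QUERY = json.dumps({"query": {"match_all": {}}})
THRESHOLD_MS = 500                 # "unacceptably slow" for this test

def bulk_index(start, count):
    lines = []
    for i in range(start, start + count):
        lines.append(json.dumps({"index": {"_index": INDEX, "_type": "doc", "_id": i}}))
        lines.append(json.dumps({"field": "value %d" % i}))
    urllib2.urlopen(HOST + "/_bulk", "\n".join(lines) + "\n").read()

total = 0
while True:
    bulk_index(total, 10000)
    total += 10000
    urllib2.urlopen(HOST + "/" + INDEX + "/_refresh", "").read()
    start = time.time()
    resp = json.loads(urllib2.urlopen(HOST + "/" + INDEX + "/_search", QUERY).read())
    wall_ms = (time.time() - start) * 1000.0
    print "%d docs: took=%d ms, round trip=%.0f ms" % (total, resp["took"], wall_ms)
    if wall_ms > THRESHOLD_MS:
        break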