Relative newcomer to the elasticsearch phenomenon here. I'm trying to
rationalize a very basic problem with my service. I'm running Jetty with
100 or so threads (a standard RESTful service with Spring MVC) and a single
instance of the ES client in the JVM, which seems to have around 14 or so
connections set up to the ES cluster. The ES cluster is a two-host cluster
with about 60GB of RAM and large 600GB SSDs. My client is a transport
client (sniffing is off at the moment). I'm on version 1.1.1.
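For concreteness, this is roughly how I construct the client (host names and cluster name are placeholders; this is the 1.x TransportClient API as I understand it):

```java
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "my-cluster")     // placeholder cluster name
        .put("client.transport.sniff", false)  // sniffing off for now
        .build();

// One shared client instance for the whole JVM
TransportClient client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("es-host-1", 9300))
        .addTransportAddress(new InetSocketTransportAddress("es-host-2", 9300));
```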
I notice that for a certain query I'm making (an aggregation over a corpus
of about 25k documents), individual requests take anywhere between 100ms
and 400ms client-side when fired off one at a time (the cluster is on EC2;
my laptop is at home). However, when I run a performance test with
20 concurrent requests against the service, latencies shoot up to around
4s. If this isn't a server-side problem, the only way I can rationalize it
is that requests are queueing up internally in the client before they're
fired off to ES. I may be wrong about this assumption, though: from trawling
the internet, I gather the elasticsearch Java client uses a cached thread
pool by default, which would spin up a new thread whenever one was needed.
I read through a few threads, but nothing stands out as to why my query
performance degrades this dramatically at just a few req/s to my service.
Maybe this isn't a client problem at all?
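For reference, my performance test is roughly equivalent to this sketch. The `timedRequestMillis` body is a placeholder standing in for the real HTTP call to my service endpoint; only the fan-out/measurement shape matches what I'm actually running:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LatencyTest {

    // Placeholder for the real REST call to the service endpoint that
    // runs the aggregation; here it just sleeps to simulate a round trip.
    static long timedRequestMillis() throws Exception {
        long start = System.nanoTime();
        Thread.sleep(10); // stand-in for the query round trip
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Fire `concurrency` requests at once and collect per-request latencies.
    static List<Long> runTest(int concurrency) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < concurrency; i++) {
            futures.add(pool.submit(LatencyTest::timedRequestMillis));
        }
        List<Long> latencies = new ArrayList<>();
        for (Future<Long> f : futures) {
            latencies.add(f.get());
        }
        pool.shutdown();
        return latencies;
    }

    public static void main(String[] args) throws Exception {
        for (long ms : runTest(20)) {
            System.out.println("latency ms: " + ms);
        }
    }
}
```

With the real endpoint plugged in, each of the 20 latencies lands around 4s, versus 100-400ms when the requests go out one at a time.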
I'd like to hear if there are best practices around this. Any pointers to
documentation would be much appreciated.
Thanks in advance,
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/3b7b6224-7053-42e4-8a84-c957fc36ed69%40googlegroups.com.