Running into an Elasticsearch high search latency (5-10s) issue in production

Hi,

We are running an Elasticsearch cluster in production. Here's the cluster configuration:

Data Nodes
Total: 54
i2.2xlarge EC2 instances
8 CPU cores
61 GB memory (30 GB allocated to the ES heap)
2 x 800 GB SSD storage

Client Nodes
Total: 6
r3.large EC2 instances
2 CPU cores
15 GB memory (12 GB allocated to the ES heap)
1 x 32 GB SSD storage

Master Nodes
Total: 3

Index Information
We have two indices on this cluster.
Index 1 (without replication)
size: 2.2 TB
docs: 653M
shards: 72 primary (2x replica)
Index 2
size: 333 GB
docs: 88M
shards: 16 primary (2x replica)

Shards per node: 5

Avg indexing rate: 5K/sec
Avg query rate: 4K/sec

Most of our queries are served in under 500 ms, but we see huge latencies of ~4-10 seconds multiple times a day. We looked into these queries and they were no different from the other, low-latency ones; they don't take more than 500 ms when we retry them with curl.
There is no related CPU spike during this time. The only observation is that the search threads are all blocked while it happens, though the search thread queue doesn't hold more than a few hundred queries.
search thread_pool size: 13
search thread_pool queue: 1000
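For reference, the pool state can be checked while a spike is in progress; a minimal sketch, assuming the default HTTP port on one of the nodes:

  # Quick per-node view of active/queued/rejected search threads (cat API)
  curl -s 'localhost:9200/_cat/thread_pool?v'
  # Full thread pool stats as JSON, per node
  curl -s 'localhost:9200/_nodes/stats/thread_pool?pretty'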

I'm going to attach the hot threads output for the cluster in the next post (due to the character limit).
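For context, a dump like the one below can be captured with the nodes hot threads API; a minimal sketch, assuming the default HTTP port (es-268 is the node named in the dump):

  # All nodes, with the default sampling (3 threads over a 500ms interval)
  curl -s 'localhost:9200/_nodes/hot_threads'
  # A single node, sampling more threads
  curl -s 'localhost:9200/_nodes/es-268/hot_threads?threads=5&interval=500ms'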

At this point we are clueless and it's impacting our application heavily. We would appreciate it if you could help us with:

  1. understanding what the hot threads mean here
  2. what possible root causes we should investigate further

Hot threads on the cluster

42.8% (214.2ms out of 500ms) cpu usage by thread 'elasticsearch[es-268][[index2][12]: Lucene Merge Thread #166763]'
8/10 snapshots sharing following 15 elements
java.lang.Thread.sleep(Native Method)
java.lang.Thread.sleep(Thread.java:340)
org.apache.lucene.store.RateLimiter$SimpleRateLimiter.pause(RateLimiter.java:151)
org.apache.lucene.store.RateLimitedFSDirectory$RateLimiterWrapper.pause(RateLimitedFSDirectory.java:96)
org.apache.lucene.store.RateLimitedIndexOutput.checkRate(RateLimitedIndexOutput.java:76)
org.apache.lucene.store.RateLimitedIndexOutput.writeBytes(RateLimitedIndexOutput.java:70)
org.apache.lucene.store.CompoundFileWriter$DirectCFSIndexOutput.writeBytes(CompoundFileWriter.java:356)
org.apache.lucene.store.DataOutput.copyBytes(DataOutput.java:281)
org.apache.lucene.store.Directory.copy(Directory.java:194)
org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:4785)
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4266)
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3811)
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:409)
org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:107)
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:486)
2/10 snapshots sharing following 7 elements
org.apache.lucene.store.Directory.copy(Directory.java:194)
org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:4785)
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4266)
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3811)
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:409)
org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:107)
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:486)

12.5% (62.3ms out of 500ms) cpu usage by thread 'elasticsearch[es-268][search][T#8]'
2/10 snapshots sharing following 18 elements
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:191)
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:157)
org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:301)
org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:312)
org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:776)
org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:767)
org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:279)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
4/10 snapshots sharing following 10 elements
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:737)
java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:647)
java.util.concurrent.LinkedTransferQueue.take(LinkedTransferQueue.java:1269)
org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)

12.3% (61.6ms out of 500ms) cpu usage by thread 'elasticsearch[es-268][search][T#7]'
2/10 snapshots sharing following 24 elements
org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.refillDocs(Lucene41PostingsReader.java:404)
org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.advance(Lucene41PostingsReader.java:529)
org.apache.lucene.search.TermScorer.advance(TermScorer.java:85)
org.apache.lucene.search.ConjunctionScorer.doNext(ConjunctionScorer.java:69)
org.apache.lucene.search.ConjunctionScorer.nextDoc(ConjunctionScorer.java:100)
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192)
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:191)
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:157)
org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:301)
org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:312)
org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:776)
org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:767)
org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:279)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
4/10 snapshots sharing following 10 elements
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:737)
java.util.concurrent.LinkedTransferQueue.xfer(LinkedTransferQueue.java:647)
java.util.concurrent.LinkedTransferQueue.take(LinkedTransferQueue.java:1269)
org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)

42.8% (214.2ms out of 500ms) cpu usage by thread 'elasticsearch[es-268][[index2][12]: Lucene Merge Thread #166763]'

This one is a merge being IO throttled. The others are queries doing query things. Were those stack traces taken during the slow times?
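For context, the pausing in that stack comes from the store throttle that rate-limits merge IO. On a 1.x cluster (which these stack traces suggest) the limit can be adjusted dynamically if merges ever do turn out to be the bottleneck; this is only a sketch, and the 40mb value is illustrative, not a recommendation:

  curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient": {
      "indices.store.throttle.max_bytes_per_sec": "40mb"
    }
  }'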

It'd be useful to have graphs of things like the query cache hits over time and the disk IO over time. Without lots of shiny graphs it is hard to tell what is up. With graphs you could say "ah, these three things spike when we get slowdowns, so it is caused by X."

For what it is worth, Elastic's Marvel collects those graphs. I don't know other tools offhand that do, but I'm sure they exist; all of the APIs are open. It really helps to graph all the things when you have a situation like this.
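If you don't have a monitoring tool wired up yet, even a crude poller gives you something to graph; a minimal sketch, assuming the default port (the interval and file names are arbitrary):

  # Snapshot node stats every 10 seconds so spikes can be lined up with latency
  while true; do
    curl -s 'localhost:9200/_nodes/stats/indices,jvm,fs,thread_pool?pretty' \
      > "nodes-stats-$(date +%s).json"
    sleep 10
  done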

In addition, GC activity (logs or, ideally, charts) may help. With spikes that large, it could very well be some long-running GCs that cause the whole system to pause.

Attached is our ES cluster dashboard.

Attaching a more granular and current dashboard.

Hi @nik9000 and @polyfractal,

Wanted to follow up on whether the ES dashboard snapshots were helpful in pointing to a likely cause of the high query times.
Let me know if you need more metrics to dig in further.

Thanks for your help!

I looked and couldn't really see anything. I wonder, if you turn on the slowlog, could you get a look at the slow queries? If you retry them on your own, are they still slow? That is, are the slow ones special, or are they all normal?
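If it isn't on yet, something like this enables it per index and can be applied without a restart; a sketch, with the index name and thresholds as placeholders (pick thresholds a bit below your normal latency so the outliers actually get logged):

  curl -XPUT 'localhost:9200/index1/_settings' -d '{
    "index.search.slowlog.threshold.query.warn": "1s",
    "index.search.slowlog.threshold.query.info": "500ms",
    "index.search.slowlog.threshold.fetch.warn": "500ms"
  }'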

Hi @anupamamann,

If you look at the second dashboard and squint a bit when comparing client node heap size and query latency, you can often spot that a major GC is occurring at the same time your query latency rises (this does not always seem to be the case, though, so you should also keep an eye on your data nodes).

I'd enable the GC log and start correlating the timestamps of GCs with the times when you experience latency spikes. After that you should probably start carefully tuning your GC settings.
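With the stock startup scripts this is usually just a matter of passing the standard HotSpot flags, e.g. via ES_JAVA_OPTS, and restarting the node; a sketch, with the log path as a placeholder:

  export ES_JAVA_OPTS="-Xloggc:/var/log/elasticsearch/gc.log \
    -XX:+PrintGCDetails \
    -XX:+PrintGCDateStamps \
    -XX:+PrintGCApplicationStoppedTime"
  bin/elasticsearch -d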

Daniel

When ES logs a query in the slow-query log, does the reported latency include the time the query stayed in the queue? I'm wondering if this is just a queuing problem when ES is under load.

Thanks for the suggestion, Nik. I looked through the logs and do not see any slowlog entries. We have the slowlog enabled on our production cluster with a threshold of "2s".

The time in the slow log starts to count when the query phase starts.