Throttle Elasticsearch warmer threads

Hi,

I am evaluating my cluster's performance after migrating to 5.2. In terms of thread pools, everything looks fine, from indexing to search to updates: whenever I run the thread_pool cat API, all the queues are empty or have only a couple of requests in flight, except for the warmer threads. The same goes for the hot threads output, where warmers make up the majority of running threads at any given time; almost 99% of the hot threads are warmers.

I want to know if there is a way to throttle the warmer threads, delay them from constantly running, or disable warmers temporarily. I am seeing higher CPU usage than I had on ES 1.7 before upgrading, and I would like to find out whether it is caused by the warmer threads or by one of my requests.
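
For reference, this is roughly how I am checking the warmer pool (assuming a node reachable on localhost:9200; the column list is just what I find useful):

# warmer pool only, one row per node
curl -s 'localhost:9200/_cat/thread_pool/warmer?v&h=node_name,name,active,queue,rejected,completed'

Every other pool sits at or near zero active/queued, but the warmer pool almost always shows active threads.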

Can you share your hot_threads output? Which warmer is using the CPU?
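
For reference, you can grab that with the nodes hot threads API, e.g. (assuming the node listens on localhost:9200; the threads and interval parameters are optional):

curl -s 'localhost:9200/_nodes/hot_threads?threads=10&interval=500ms'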

You can check the Hot_Threads output here: https://drive.google.com/file/d/0B3lcT_0qgTuGTzZTQkxSc2ZrLXc/view?usp=sharing

Here is a snippet as Discuss did not allow me to paste the entire thing:

73.2% (365.9ms out of 500ms) cpu usage by thread 'elasticsearch[Zoolz_06][warmer][T#5]'
2/10 snapshots sharing following 21 elements
org.apache.lucene.index.MultiTermsEnum$TermMergeQueue.fillTop(MultiTermsEnum.java:429)
org.apache.lucene.index.MultiTermsEnum.pullTop(MultiTermsEnum.java:267)
org.apache.lucene.index.MultiTermsEnum.next(MultiTermsEnum.java:305)
org.apache.lucene.index.MultiDocValues$OrdinalMap.&lt;init&gt;(MultiDocValues.java:554)
org.apache.lucene.index.MultiDocValues$OrdinalMap.build(MultiDocValues.java:511)
org.apache.lucene.index.MultiDocValues$OrdinalMap.build(MultiDocValues.java:475)
org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.buildOrdinalMap(ParentChildIndexFieldData.java:175)
org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.localGlobalDirect(ParentChildIndexFieldData.java:199)
org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.localGlobalDirect(ParentChildIndexFieldData.java:71)
org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.lambda$load$1(IndicesFieldDataCache.java:159)
org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$$Lambda$1674/1438035241.load(Unknown Source)
org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:398)
org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:154)
org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.loadGlobal(ParentChildIndexFieldData.java:160)
org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.loadGlobal(ParentChildIndexFieldData.java:71)
org.elasticsearch.index.IndexWarmer$FieldDataWarmer.lambda$warmReader$1(IndexWarmer.java:142)
org.elasticsearch.index.IndexWarmer$FieldDataWarmer$$Lambda$1672/1908661636.run(Unknown Source)
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
8/10 snapshots sharing following 19 elements
org.apache.lucene.index.MultiTermsEnum.next(MultiTermsEnum.java:301)
org.apache.lucene.index.MultiDocValues$OrdinalMap.&lt;init&gt;(MultiDocValues.java:554)
org.apache.lucene.index.MultiDocValues$OrdinalMap.build(MultiDocValues.java:511)
org.apache.lucene.index.MultiDocValues$OrdinalMap.build(MultiDocValues.java:475)
org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.buildOrdinalMap(ParentChildIndexFieldData.java:175)
org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.localGlobalDirect(ParentChildIndexFieldData.java:199)
org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.localGlobalDirect(ParentChildIndexFieldData.java:71)
org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.lambda$load$1(IndicesFieldDataCache.java:159)
org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache$$Lambda$1674/1438035241.load(Unknown Source)
org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:398)
org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:154)
org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.loadGlobal(ParentChildIndexFieldData.java:160)
org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData.loadGlobal(ParentChildIndexFieldData.java:71)
org.elasticsearch.index.IndexWarmer$FieldDataWarmer.lambda$warmReader$1(IndexWarmer.java:142)
org.elasticsearch.index.IndexWarmer$FieldDataWarmer$$Lambda$1672/1908661636.run(Unknown Source)
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)

Any ideas? My cluster's CPUs are averaging around 70% under normal workload.
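
For what it's worth, the frames above are the field data warmer building global ordinals for the _parent field (ParentChildIndexFieldData.loadGlobal called from IndexWarmer$FieldDataWarmer), which runs on refresh while eager loading is enabled. If that turns out to be the culprit, one option is to stop building them eagerly and instead pay the cost on the first parent/child query after a refresh. A minimal sketch of the mapping, assuming a hypothetical new index my_index with a child type my_child (the names are illustrative, not from this thread):

# with eager_global_ordinals disabled, global ordinals for _parent are built
# lazily by the first parent/child query after a refresh instead of by the warmer
curl -s -XPUT 'localhost:9200/my_index' -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "my_parent": {},
    "my_child": {
      "_parent": {
        "type": "my_parent",
        "eager_global_ordinals": false
      }
    }
  }
}'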
