I am observing high disk read I/O on an Elasticsearch node. I am using a single-node cluster with 5 shards.
Environment:
- Elasticsearch 2.3.1
- Disk: SSD
- Cores: 16
- RAM: 64 GB
Segments and merging could be one of the issues, but as mentioned in this link I don't see any INFO log stating "now throttling indexing".
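For reference, this is roughly how I am checking the merge stats myself (a minimal Python sketch using requests; http://localhost:9200 is an assumption about where the node listens):

    import requests

    # Index-level merge stats for the whole node (localhost:9200 is an assumption).
    resp = requests.get("http://localhost:9200/_stats/merge", params={"pretty": "true"})
    merges = resp.json()["_all"]["total"]["merges"]

    # Comparing total merge time with throttled time hints at whether
    # merging is what is driving the disk reads.
    print("current merges:       ", merges.get("current"))
    print("total merges:         ", merges.get("total"))
    print("total merge time (ms):", merges.get("total_time_in_millis"))
    print("throttled time (ms):  ", merges.get("total_throttled_time_in_millis"))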
Can someone let me know what the problem could be and how I can debug this issue?
The node stats look like below:
https://gist.github.com/debraj-manna/296956d1456ac8094a6c532b238537cb
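For completeness, this is roughly how I am collecting the node stats and the per-shard segment counts (same localhost:9200 assumption):

    import requests

    BASE = "http://localhost:9200"   # assumption: local single-node cluster

    # Full node stats for the node.
    node_stats = requests.get(BASE + "/_nodes/stats", params={"pretty": "true"}).text

    # Per-shard segment listing, to see how many segments each of the 5 shards carries.
    segments = requests.get(BASE + "/_cat/segments", params={"v": "true"}).text
    print(segments)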
index.refresh_interval and index.translog.flush_threshold_size are both set to their default values.
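To double-check, this is roughly how I am reading the live settings back for the denorm index (the index name is taken from the hot-threads output below; localhost:9200 is again an assumption):

    import requests

    BASE = "http://localhost:9200"   # assumption: local single-node cluster

    # In 2.x, GET _settings only returns settings that were explicitly set,
    # so both values being absent means they are still at their defaults.
    settings = requests.get(BASE + "/denorm/_settings", params={"pretty": "true"}).json()
    index_settings = settings["denorm"]["settings"]["index"]

    print("refresh_interval:",
          index_settings.get("refresh_interval", "<not set, default 1s>"))
    print("translog.flush_threshold_size:",
          index_settings.get("translog", {}).get("flush_threshold_size",
                                                 "<not set, default 512mb>"))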
The hot threads report (_nodes/hot_threads?pretty) is as follows:
Hot threads at 2017-10-17T12:45:39.517Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
71.6% (357.8ms out of 500ms) cpu usage by thread 'elasticsearch[Axum][[denorm][1]: Lucene Merge Thread #6011]'
3/10 snapshots sharing following 13 elements
org.apache.lucene.index.MultiTermsEnum.pushTop(MultiTermsEnum.java:275)
org.apache.lucene.index.MultiTermsEnum.next(MultiTermsEnum.java:301)
org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.next(FilterLeafReader.java:195)
org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:438)
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:198)
org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:193)
org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:95)
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4075)
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3655)
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94)
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)