CPU utilization crossing 98%

Please suggest. I am observing high CPU utilization on one of the cluster nodes, so I queried that node's hot threads. Below is the response I received; please help.

Whenever any node hits high CPU usage, search stops responding and I always have to restart the Elasticsearch service from services.msc.

When I execute the hot_threads command, this is what I get.
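For reference, the call itself is roughly the following (assuming the default HTTP port 9200 on the node; threads=3 and interval=500ms are just the defaults that also appear in the output header):

curl -s "http://localhost:9200/_nodes/hot_threads?threads=3&interval=500ms"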

::: [Prod-Node-Two-7][MAjfdhsORja9ACRDSj2MBg][pro-phvapp03][inet[/x.x.x.7:9300]]
Hot threads at 2019-03-27T05:42:44.594Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

103.1% (515.6ms out of 500ms) cpu usage by thread 'elasticsearch[Prod-Node-Two-7][search][T#8]'
3/10 snapshots sharing following 17 elements
org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:288)
org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:636)
org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:683)
org.apache.lucene.search.QueryWrapperFilter.getDocIdSet(QueryWrapperFilter.java:55)
org.elasticsearch.search.fetch.matchedqueries.MatchedQueriesFetchSubPhase.addMatchedQueries(MatchedQueriesFetchSubPhase.java:99)
org.elasticsearch.search.fetch.matchedqueries.MatchedQueriesFetchSubPhase.hitExecute(MatchedQueriesFetchSubPhase.java:80)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:194)
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:516)
org.elasticsearch.search.action.SearchServiceTransportAction$FetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:868)
org.elasticsearch.search.action.SearchServiceTransportAction$FetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:862)
org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:279)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
java.lang.Thread.run(Unknown Source)
5/10 snapshots sharing following 15 elements
org.apache.lucene.codecs.blocktree.IntersectTermsEnum.next(IntersectTermsEnum.java:447)
org.apache.lucene.search.MultiTermQueryWrapperFilter.getDocIdSet(MultiTermQueryWrapperFilter.java:114)
org.apache.lucene.search.ConstantScoreQuery$ConstantWeight.scorer(ConstantScoreQuery.java:157)
org.apache.lucene.search.QueryWrapperFilter$1.iterator(QueryWrapperFilter.java:59)
org.elasticsearch.search.fetch.matchedqueries.MatchedQueriesFetchSubPhase.addMatchedQueries(MatchedQueriesFetchSubPhase.java:123)
org.elasticsearch.search.fetch.matchedqueries.MatchedQueriesFetchSubPhase.hitExecute(MatchedQueriesFetchSubPhase.java:80)
org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:194)
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:516)

When I executed _cluster/stats, this is the output:
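Roughly like this, again assuming port 9200 (pretty only affects the formatting):

curl -s "http://localhost:9200/_cluster/stats?pretty"
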
{"timestamp":1553666499641,"cluster_name":"clusterProdHAPool","status":"green","indices":{"count":2,"shards":{"total":20,"primaries":10,"replication":1.0,"index":{"shards":{"min":10,"max":10,"avg":10.0},"primaries":{"min":5,"max":5,"avg":5.0},"replication":{"min":1.0,"max":1.0,"avg":1.0}}},"docs":{"count":83664,"deleted":0},"store":{"size_in_bytes":96927364,"throttle_time_in_millis":109288},"fielddata":{"memory_size_in_bytes":108940,"evictions":0},"filter_cache":{"memory_size_in_bytes":373720,"evictions":0},"id_cache":{"memory_size_in_bytes":0},"completion":{"size_in_bytes":0},"segments":{"count":68,"memory_in_bytes":1679744,"index_writer_memory_in_bytes":0,"index_writer_max_memory_in_bytes":449793224,"version_map_memory_in_bytes":0,"fixed_bit_set_memory_in_bytes":0},"percolate":{"total":0,"time_in_millis":0,"current":0,"memory_size_in_bytes":-1,"memory_size":"-1b","queries":0}},"nodes":{"count":{"total":3,"master_only":0,"data_only":0,"master_data":3,"client":0},"versions":["1.7.2"],"os":{"available_processors":20,"mem":{"total_in_bytes":38653095936},"cpu":[{"vendor":"Intel","model":"Xeon","mhz":2300,"total_cores":8,"total_sockets":4,"cores_per_socket":2,"cache_size_in_bytes":-1,"count":3}]},"process":{"cpu":{"percent":62},"open_file_descriptors":{"min":1123,"max":1774,"avg":1520}},"jvm":{"max_uptime_in_millis":9608128557,"versions":[{"version":"1.7.0_80","vm_name":"Java HotSpot(TM) 64-Bit Server VM","vm_version":"24.80-b11","vm_vendor":"Oracle Corporation","count":3}],"mem":{"heap_used_in_bytes":828044784,"heap_max_in_bytes":3114795008},"threads":273},"fs":{"total_in_bytes":429487280128,"free_in_bytes":409375649792,"available_in_bytes":409375649792,"disk_reads":22752798,"disk_writes":3281554,"disk_io_op":26034352,"disk_read_size_in_bytes":10569035776,"disk_write_size_in_bytes":3256944128,"disk_io_size_in_bytes":13825979904,"disk_queue":"0"},"plugins":}}

When I executed _cat/thread_pool?v, this is the output:
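Roughly this, again assuming port 9200; the cat API also accepts an h= parameter if you only want the search-pool columns:

curl -s "http://localhost:9200/_cat/thread_pool?v"
curl -s "http://localhost:9200/_cat/thread_pool?v&h=host,ip,search.active,search.queue,search.rejected"
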
host ip bulk.active bulk.queue bulk.rejected index.active index.queue index.rejected search.active search.queue search.rejected
pro-phvapp02 x.x.x.6 0 0 0 0 0 0 0 0 0
pro-phvapp04 x.x.x.25 0 0 0 0 0 0 0 0 0
pro-phvapp03 x.x.x.7 0 0 0 0 0 0 0 0 0

Please help with this issue. For the past 2-3 months it has been happening very frequently. We have been running this version for about 3.5 years.

Hi,

There's a post similar to your problem that was closed 3 years ago, so you may be using the same version.
Take a look at the suggestions here:

I also saw some high CPU with a recent version, and reindexing the monthly indices into a yearly one helped fix it.

That reduced the number of indices.
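For what it's worth, on recent versions that consolidation can be done with the reindex API (it does not exist on 1.x, where as far as I know you need a scroll-plus-bulk client or an external tool instead). The index names below are only placeholders:

curl -s -X POST "http://localhost:9200/_reindex" -H "Content-Type: application/json" -d '
{
  "source": { "index": ["logs-2019.01", "logs-2019.02", "logs-2019.03"] },
  "dest":   { "index": "logs-2019" }
}'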
