Unusual behavior in a 7.17.0 ELK cluster

Our Elasticsearch 7.17.0 cluster has been costing me a lot of time and effort.
After changes to the heap memory of the Elasticsearch pods, I have observed that all thread pools show high CPU usage even though no transforms, searches, etc. are running. I'm looking for some insight into this; please find the logs below. Note that the cluster uses the default thread pool settings, with 3 master and 6 data nodes.

Do we need to change the default thread pool configuration?

Sample log below:
88.0% [cpu=2.9%, other=85.1%] (440ms out of 500ms) cpu usage by thread 'elasticsearch[elasticsearch-xxxxxxxxxxxxxxxxxxxxxxx][refresh][T#1]'
  10/10 snapshots sharing following 28 elements
    app//org.apache.lucene.index.DocumentsWriterPerThread.flush(
    app//org.apache.lucene.index.DocumentsWriter.doFlush(
    app//org.apache.lucene.index.DocumentsWriter.flushAllThreads(
    app//org.apache.lucene.index.IndexWriter.getReader(
    app//org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(
    app//org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(
    app//org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(
    app//org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(
    app//org.apache.lucene.index.DirectoryReader.openIfChanged(
    app//org.elasticsearch.index.engine.ElasticsearchReaderManager.refreshIfNeeded(
    app//org.elasticsearch.index.engine.ElasticsearchReaderManager.refreshIfNeeded(
    app//
    app//
    app//org.elasticsearch.index.engine.InternalEngine$ExternalReaderManager.refreshIfNeeded(
    app//org.elasticsearch.index.engine.InternalEngine$ExternalReaderManager.refreshIfNeeded(
    app//
    app//
    app//org.elasticsearch.index.engine.InternalEngine.refresh(
    app//org.elasticsearch.index.engine.InternalEngine.maybeRefresh(
    app//org.elasticsearch.index.shard.IndexShard.scheduledRefresh(
    app//org.elasticsearch.index.IndexService.maybeRefreshEngine(
    app//org.elasticsearch.index.IndexService.access$200(
    app//org.elasticsearch.index.IndexService$AsyncRefreshTask.runInternal(
    app//
    app//org.elasticsearch.common.util.concurrent.ThreadContext$
    java.base@17.0.1/java.util.concurrent.ThreadPoolExecutor.runWorker(
    java.base@17.0.1/java.util.concurrent.ThreadPoolExecutor$
    java.base@17.0.1/

All that shows is that your indices are having refreshes applied. It doesn't show high CPU or thread pool use.
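If the periodic refreshes themselves are heavier than you'd like, one common knob (not a fix for slow storage, just a way to make refreshes less frequent) is the per-index refresh_interval, which defaults to 1s. A sketch of the settings change, using a placeholder index name:

```
PUT /my-index/_settings
{
  "index": {
    "refresh_interval": "30s"
  }
}
```

The trade-off is that newly indexed documents take up to that long to become visible to searches.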

Thank you for this response. But I'm not sure; transport connections are failing very often. I am assessing the cluster with the default thread pool settings.

Are you potentially using some type of very slow networked storage?
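Note the split in your hot threads header: cpu=2.9% vs other=85.1%. The "other" portion is time the thread was not on CPU (e.g. blocked on I/O), which would fit slow storage. A quick sketch for pulling those two fields out of a hot threads usage line (the sample line here is abbreviated from your log):

```shell
# Extract the cpu= and other= percentages from a hot_threads header line.
# A high "other" value means the thread spent most of its time waiting,
# not computing.
line="88.0% [cpu=2.9%, other=85.1%] (440ms out of 500ms) cpu usage by thread 'elasticsearch[...][refresh][T#1]'"
cpu=$(printf '%s\n' "$line" | sed -n 's/.*cpu=\([0-9.]*\)%.*/\1/p')
other=$(printf '%s\n' "$line" | sed -n 's/.*other=\([0-9.]*\)%.*/\1/p')
echo "cpu=${cpu}% other=${other}%"
```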

All nodes are pods with persistent backend storage from Datafabric, and the data mount is 10 GB by default.

I have no experience with Datafabric. Would it be possible to get the iowait and/or iostats the pods are experiencing, e.g. iostat -x?

Unfortunately, I can't get the iostat -x output, as the executable is not found in the container's PATH:

OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: "iostat": executable file not found in $PATH": unknown
command terminated with exit code 126
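Since iostat isn't in the image, one fallback (assuming the pod can read /proc) is the kernel's raw block-device counters in /proc/diskstats. A minimal sketch; field 3 is the device name and field 13 is io_ticks, the total milliseconds the device was busy with I/O:

```shell
# Print device name and cumulative busy time (ms) for each block device.
# Sampling this twice and diffing the values over the interval gives a
# rough utilisation figure, similar in spirit to iostat's %util column.
awk '{ printf "%-12s busy_ms=%s\n", $3, $13 }' /proc/diskstats
```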

Also, I deployed this environment using the operator-based ELK setup.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.