Interpretation of elasticsearch hot threads

The following is the output of http://localhost:9200/_nodes/hot_threads?pretty
while roughly 10 queries were hitting an Elasticsearch 0.90.7 cluster (the
request I used is sketched right after this paragraph, followed by the output
itself). I'm wondering what the management and scheduler entries in this output
correspond to. Given the output below, is this a concern, and if it is, how can
we improve the CPU usage? Thank you.
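For reference, this is roughly how I pull the output; the host and port are
from my local setup, and the extra parameters in the second request (threads,
interval, type) are, as far as I know, supported by the hot threads API but
should be double-checked against the 0.90.x docs:

curl 'http://localhost:9200/_nodes/hot_threads?pretty'

# optionally narrow the sample: the 5 busiest threads over a 1s interval, CPU time only
curl 'http://localhost:9200/_nodes/hot_threads?pretty&threads=5&interval=1s&type=cpu'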

48.0% (240ms out of 500ms) cpu usage by thread 'elasticsearch[node1][management][T#1]'
  10/10 snapshots sharing following 8 elements
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:702)
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(LinkedTransferQueue.java:615)
    org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(LinkedTransferQueue.java:1117)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:945)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    java.lang.Thread.run(Thread.java:662)

48.0% (240ms out of 500ms) cpu usage by thread 'elasticsearch[node1][management][T#5]'
  5/10 snapshots sharing following 12 elements
    org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.terms(PerFieldPostingsFormat.java:217)
    org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:348)
    org.elasticsearch.index.shard.service.InternalIndexShard.completionStats(InternalIndexShard.java:541)
    org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:151)
    org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:212)
    org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)
    org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)
    org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)
    org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)
    java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    java.lang.Thread.run(Thread.java:662)
  2/10 snapshots sharing following 14 elements
    java.util.ArrayList.size(ArrayList.java:177)
    java.util.AbstractList$Itr.hasNext(AbstractList.java:339)
    java.util.Collections$UnmodifiableCollection$1.hasNext(Collections.java:1009)
    org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:344)
    org.elasticsearch.index.shard.service.InternalIndexShard.completionStats(InternalIndexShard.java:541)
    org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:151)
    org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:212)
    org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)
    org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)
    org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)
    org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)
    java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    java.lang.Thread.run(Thread.java:662)
  2/10 snapshots sharing following 14 elements
    java.util.TreeMap.getEntry(TreeMap.java:335)
    java.util.TreeMap.get(TreeMap.java:255)
    org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.terms(PerFieldPostingsFormat.java:216)
    org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:348)
    org.elasticsearch.index.shard.service.InternalIndexShard.completionStats(InternalIndexShard.java:541)
    org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:151)
    org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:212)
    org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)
    org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)
    org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)
    org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)
    java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    java.lang.Thread.run(Thread.java:662)
  unique snapshot
    org.elasticsearch.common.hppc.Internals.rehash(Internals.java:10)
    org.elasticsearch.common.hppc.ObjectObjectOpenHashMap.get(ObjectObjectOpenHashMap.java:554)
    org.elasticsearch.common.collect.ImmutableOpenMap.get(ImmutableOpenMap.java:52)
    org.elasticsearch.index.store.Store$StoreDirectory.fileLength(Store.java:398)
    org.elasticsearch.common.lucene.Directories.estimateSize(Directories.java:42)
    org.elasticsearch.index.store.Store.stats(Store.java:142)
    org.elasticsearch.index.shard.service.InternalIndexShard.storeStats(InternalIndexShard.java:495)
    org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:118)
    org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:212)
    org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)
    org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)
    org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)
    org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)
    java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    java.lang.Thread.run(Thread.java:662)

4.0% (20ms out of 500ms) cpu usage by thread 'elasticsearch[node1][scheduler][T#1]'
  9/10 snapshots sharing following 9 elements
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
    java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    java.lang.Thread.run(Thread.java:662)
  unique snapshot
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    java.lang.Thread.run(Thread.java:662)

/Jason


Hey,

I am not sure whether this is a concern yet.

Is your system under high load?
Are you querying for statistics a lot? That is where the CPU time is going,
and I am wondering why.
Are you using the completion suggester?
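
To make that second question concrete: the TransportNodesStatsAction /
NodeService.stats frames in your traces are what node-level stats requests
exercise. A minimal sketch of the kind of calls I mean (host and port are
illustrative; the indices stats endpoint builds the same CommonStats objects,
so it should hit the same completionStats path):

curl 'http://localhost:9200/_nodes/stats?pretty'
curl 'http://localhost:9200/_stats?pretty'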

--Alex


Hey Alex,

Yes, normally with the three-node cluster it hovers around 7-12, but right
now each of the nodes has gone up to 20-25.
We regularly poll the Elasticsearch API for monitoring purposes; roughly
speaking, the poll looks like the loop sketched below.
No, we don't use auto-completion, but we do sort and scroll through the
results.
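
Roughly what the monitoring poll looks like (exact endpoints and interval are
illustrative, not our precise setup):

while true; do
  # node-level stats for the monitoring dashboards
  curl -s 'http://localhost:9200/_nodes/stats?pretty' > /dev/null
  # cluster health check
  curl -s 'http://localhost:9200/_cluster/health?pretty' > /dev/null
  sleep 10
done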

Should that be a concern now?

/Jason


Hey,

sorry, I wasn't clear in my first mail, so let me explain what I saw in the
stats.

The hot threads output showed that a fair amount of time was spent in this
method:

org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:344)

This method is called when the statistics for the completion suggester are
generated. However, you do not use the completion suggester, and it still
seems to take time. How regularly are you polling the stats? Can you switch
that polling off and check whether it changes your load pattern? If not, can
you rerun the hot threads API while not polling the stats API (see the sketch
below)?
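
A minimal sketch of what I mean, assuming you can pause the monitoring poller
first (host and parameters are illustrative):

# pause the monitoring poll, then sample the busiest threads again
curl 'http://localhost:9200/_nodes/hot_threads?pretty&threads=5&interval=1s'

If the management threads drop out of the hot threads output once the stats
polling is paused, that would suggest the CPU is going into stats collection
(completion and store stats in particular) rather than into your queries.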

--Alex

On Mon, Dec 2, 2013 at 11:11 PM, Jason Wee peichieh@gmail.com wrote:

Hey Alex,

Yes, normally with the three nodes cluster, it hover around 7-12, but
right now I check, each of them has gone up to 20-25.
We regulary poll the elasticsearch api for monitoring purpose.
No, we dont' use auto completion but we do have sorting and scrolling
through the result.

Should that be of concern now?

/Jason

On Tue, Dec 3, 2013 at 12:16 AM, Alexander Reelsen alr@spinscale.dewrote:

Hey,

not sure if this is a concern yet.

Is your system under high load?
Are you querying for statistics a lot (this is where the CPU load goes to
and I am wondering why)?
Are you using the completion suggester?

--Alex

On Mon, Dec 2, 2013 at 3:36 PM, Jason Wee peichieh@gmail.com wrote:

The following are the output of
http://localhost:9200/_nodes/hot_threads?pretty when like 10 query hit
on the elasticsearch cluster 0.90.7. I'm wondering what is the managment
and scheduler in the following correspond to? With the output below, is
this a concern and if it is, how can we improve the cpu usage? Thank you.

48.0% (240ms out of 500ms) cpu usage by thread
'elasticsearch[node1][management][T#1]'
10/10 snapshots sharing following 8 elements
sun.misc.Unsafe.park(Native Method)

java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:702)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(LinkedTransferQueue.java:615)

org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.poll(LinkedTransferQueue.java:1117)

java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:945)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
java.lang.Thread.run(Thread.java:662)

48.0% (240ms out of 500ms) cpu usage by thread
'elasticsearch[node1][management][T#5]'
5/10 snapshots sharing following 12 elements

org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.terms(PerFieldPostingsFormat.java:217)

org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:348)

org.elasticsearch.index.shard.service.InternalIndexShard.completionStats(InternalIndexShard.java:541)

org.elasticsearch.action.admin.indices.stats.CommonStats.(CommonStats.java:151)

org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:212)

org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)

org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)

java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
java.lang.Thread.run(Thread.java:662)
2/10 snapshots sharing following 14 elements
java.util.ArrayList.size(ArrayList.java:177)
java.util.AbstractList$Itr.hasNext(AbstractList.java:339)

java.util.Collections$UnmodifiableCollection$1.hasNext(Collections.java:1009)

org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:344)

org.elasticsearch.index.shard.service.InternalIndexShard.completionStats(InternalIndexShard.java:541)

org.elasticsearch.action.admin.indices.stats.CommonStats.(CommonStats.java:151)

org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:212)

org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)

org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)

java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
java.lang.Thread.run(Thread.java:662)
2/10 snapshots sharing following 14 elements
java.util.TreeMap.getEntry(TreeMap.java:335)
java.util.TreeMap.get(TreeMap.java:255)

org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.terms(PerFieldPostingsFormat.java:216)

org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:348)

org.elasticsearch.index.shard.service.InternalIndexShard.completionStats(InternalIndexShard.java:541)

org.elasticsearch.action.admin.indices.stats.CommonStats.(CommonStats.java:151)

org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:212)

org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)

org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)

java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
java.lang.Thread.run(Thread.java:662)
unique snapshot
org.elasticsearch.common.hppc.Internals.rehash(Internals.java:10)

org.elasticsearch.common.hppc.ObjectObjectOpenHashMap.get(ObjectObjectOpenHashMap.java:554)

org.elasticsearch.common.collect.ImmutableOpenMap.get(ImmutableOpenMap.java:52)

org.elasticsearch.index.store.Store$StoreDirectory.fileLength(Store.java:398)

org.elasticsearch.common.lucene.Directories.estimateSize(Directories.java:42)
org.elasticsearch.index.store.Store.stats(Store.java:142)

org.elasticsearch.index.shard.service.InternalIndexShard.storeStats(InternalIndexShard.java:495)

org.elasticsearch.action.admin.indices.stats.CommonStats.(CommonStats.java:118)

org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:212)

org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)

org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)

java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
java.lang.Thread.run(Thread.java:662)

4.0% (20ms out of 500ms) cpu usage by thread

'elasticsearch[node1][scheduler][T#1]'
9/10 snapshots sharing following 9 elements
sun.misc.Unsafe.park(Native Method)

java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)

java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
java.util.concurrent.DelayQueue.take(DelayQueue.java:164)

java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)

java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)

java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
java.lang.Thread.run(Thread.java:662)
unique snapshot

java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
java.lang.Thread.run(Thread.java:662)


/Jason


Alex,

Thank you for sharing your knowledge. We monitor the ES nodes every 5 minutes;
I think what you shared gives us a good pointer for improving that
5-minute monitoring interval.
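
For context, the poll is essentially a scheduled node stats call, roughly like
the sketch below (the cron entry and output path are only an illustration, not
our exact setup):

# hypothetical example: poll node stats every 5 minutes and keep the last response
*/5 * * * * curl -s 'http://localhost:9200/_nodes/stats?pretty' > /tmp/es-node-stats.json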

Any idea what is causing the usage below?

4.0% (20ms out of 500ms) cpu usage by thread 'elasticsearch[node1][scheduler][T#1]'

/Jason

On Tue, Dec 3, 2013 at 4:38 PM, Alexander Reelsen alr@spinscale.de wrote:

Hey,

sorry, I wasn't clear in my first mail, so let me explain what I saw in the
stats.

The hot threads output showed that a fair amount of the time spent was in
this method:

org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:344)

This method is called when the statistics for the completion suggester
are generated. However, you do not use the completion suggester, and it
still seems to take time. How regularly are you polling the stats? Can you
switch that off and check if it changes your load pattern? If not, can you
rerun the hot threads API (while not polling the stats API)?
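
For reference, the two calls involved here are along these lines (standard
endpoints; adjust host and port to your setup):

# re-check hot threads while the stats poller is switched off
curl 'http://localhost:9200/_nodes/hot_threads?pretty'
# the node stats call that walks every shard (including the completion stats seen in the trace)
curl 'http://localhost:9200/_nodes/stats?pretty'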

--Alex

On Mon, Dec 2, 2013 at 11:11 PM, Jason Wee peichieh@gmail.com wrote:

Hey Alex,

Yes, normally with the three-node cluster it hovers around 7-12, but right
now when I check, each of them has gone up to 20-25.
We regularly poll the elasticsearch API for monitoring purposes.
No, we don't use auto-completion, but we do have sorting and scrolling
through the results.
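
For illustration only, a sorted scroll of this general shape (the index name,
sort field, and sizes here are made up, not our actual queries):

# start a scroll over sorted results (hypothetical index and sort field)
curl -XGET 'http://localhost:9200/myindex/_search?scroll=1m&size=50&pretty' -d '{
  "query" : { "match_all" : {} },
  "sort" : [ { "timestamp" : { "order" : "desc" } } ]
}'
# fetch the next page with the scroll_id returned by the previous call
curl -XGET 'http://localhost:9200/_search/scroll?scroll=1m&scroll_id=<scroll_id-from-previous-response>'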

Should that be of concern now?

/Jason

On Tue, Dec 3, 2013 at 12:16 AM, Alexander Reelsen alr@spinscale.de wrote:

Hey,

not sure if this is a concern yet.

Is your system under high load?
Are you querying for statistics a lot (this is where the CPU load goes,
and I am wondering why)?
Are you using the completion suggester?

--Alex

On Mon, Dec 2, 2013 at 3:36 PM, Jason Wee peichieh@gmail.com wrote:

[original message and hot_threads output snipped; identical to the question quoted at the top of this thread]

/Jason
