100% CPU spike on one of the machines

Good morning, how are you?
I'm a friend of Waldemar Neto, and I've been working with Elasticsearch since v1.2.
Today I went live in production on v2.4.2, but I noticed that every 5 minutes one of the machines hits a 100% CPU spike. It doesn't affect performance, but can you give me a tip? Is there something I may have configured that could be causing this spike?

Our indices are small: 23,000 documents, 130 MB in total. My settings are:
index.refresh_interval: 30s
index.requests.cache.enable: true
indices.requests.cache.size: 4%
indices.queries.cache.size: 20%
indices.breaker.total.limit: 85%
indices.fielddata.cache.size: 20%
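
For reference, the index.* entries above are dynamic per-index settings, while the indices.* entries are node-level settings normally set in elasticsearch.yml. A minimal sketch of updating the dynamic ones over HTTP follows; localhost:9200 and the index name "meuindice" are placeholders, not values taken from this thread.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class UpdateIndexSettings {
    public static void main(String[] args) throws Exception {
        // PUT the dynamic index-level settings; the node-level indices.* settings
        // still live in elasticsearch.yml.
        URL url = new URL("http://localhost:9200/meuindice/_settings");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");

        String body = "{\"index.refresh_interval\": \"30s\", "
                    + "\"index.requests.cache.enable\": true}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}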

Hello Rafael, you could use the hot threads API to see what's going on:

https://www.elastic.co/guide/en/elasticsearch/reference/2.4/cluster-nodes-hot-threads.html
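
If it is easier to grab from code than from curl, here is a minimal Java sketch that calls that endpoint over plain HTTP; the host and port (localhost:9200) are assumptions, adjust them to your machines.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HotThreadsDump {
    public static void main(String[] args) throws Exception {
        // Same call as on the documentation page above, just issued from Java.
        URL url = new URL("http://localhost:9200/_nodes/hot_threads?threads=3&interval=500ms");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // plain-text dump of the busiest threads per node
            }
        }
    }
}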

Is Elasticsearch alone on this machine? What's the hardware config?

Yes, I have only Elasticsearch on each machine, and it follows the standard configuration.
Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 8 cores, 20 GB RAM

This is the result of hot_threads:

::: {esprodnovo1}{ktcu_LCcRNedF70aEXDF5A}{192.168.1.77}{192.168.1.77:9300}
Hot threads at 2017-03-01T14:42:31.582Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

::: {esprodnovo2}{B1K9hzbPRrSGD7ZjT4xgSA}{192.168.1.78}{192.168.1.78:9300}
Hot threads at 2017-03-01T14:41:25.017Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

2.2% (10.7ms out of 500ms) cpu usage by thread 'elasticsearch[esprodnovo2][transport_client_worker][T#7]{New I/O worker #7}'
 2/10 snapshots sharing following 15 elements
   org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
   org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
   org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
   org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
   org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
   org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
   org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
   org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
   org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
   org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
   org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
   org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   java.lang.Thread.run(Thread.java:745)

2.1% (10.7ms out of 500ms) cpu usage by thread 'elasticsearch[esprodnovo2][transport_client_worker][T#8]{New I/O worker #8}'
 3/10 snapshots sharing following 7 elements
   org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
   org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
   org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
   org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   java.lang.Thread.run(Thread.java:745)

Can you get the hot_threads output at the moment you see the CPU spikes?
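
If the spikes are too short to catch by hand, a rough sketch like the one below can dump hot_threads on a schedule so that at least one sample lands inside a spike. The 15-second interval and localhost:9200 are assumptions to adjust.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HotThreadsWatcher {
    public static void main(String[] args) {
        // One dump every 15 seconds; with a spike every ~5 minutes, some samples
        // should fall inside it.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(HotThreadsWatcher::capture, 0, 15, TimeUnit.SECONDS);
    }

    static void capture() {
        try {
            URL url = new URL("http://localhost:9200/_nodes/hot_threads?threads=3");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (InputStream in = conn.getInputStream()) {
                // Write each sample to its own timestamped file for later inspection.
                Path out = Paths.get("hot_threads_" + Instant.now().toEpochMilli() + ".txt");
                Files.copy(in, out);
            }
        } catch (Exception e) {
            e.printStackTrace();   // keep the scheduled task alive on transient errors
        }
    }
}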

The peaks are very fast, so it is difficult to capture one at the exact moment. Can you tell me where I can find a good practical example in Java to solve this problem?
QueryPhaseExecutionException [Result window is too large, from + size must be less than or equal to: [999999999] but was [1410065416]
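
That exception means a request asked for from + size beyond the configured result window (very deep pagination), even with the limit already raised to 999999999. The usual alternative is to page with the scroll API instead of a huge from/size. Below is a minimal sketch, assuming the 2.x Java transport client, the default cluster name, one of the node addresses from this thread, and a placeholder index name "meuindice".

import java.net.InetAddress;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;

public class ScrollExample {
    public static void main(String[] args) throws Exception {
        // 2.x transport client pointed at one of the data nodes; assumes the
        // default cluster name "elasticsearch".
        TransportClient client = TransportClient.builder().build()
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName("192.168.1.77"), 9300));

        // First page: open a scroll context instead of using a huge from/size.
        SearchResponse resp = client.prepareSearch("meuindice")
                .setQuery(QueryBuilders.matchAllQuery())
                .setScroll(new TimeValue(60000))   // keep the scroll context alive for 60s
                .setSize(500)                      // documents per round-trip
                .execute().actionGet();

        while (resp.getHits().getHits().length > 0) {
            for (SearchHit hit : resp.getHits().getHits()) {
                System.out.println(hit.getId());   // process each document here
            }
            // Next page, reusing the scroll id returned by the previous response.
            resp = client.prepareSearchScroll(resp.getScrollId())
                    .setScroll(new TimeValue(60000))
                    .execute().actionGet();
        }
        client.close();
    }
}

For user-facing pagination, keeping from + size small and narrowing the query is usually the better design; scroll is intended for walking through large result sets in bulk.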
