No, we can upgrade it. I just wanted to warn you about this problem.
Heya,
So, you might have hit the bug that was fixed in JDK 1.6u18. Is there any
reason why you don't use a newer version?
-shay.banon
On Friday, April 8, 2011 at 8:53 AM, Mustafa Sener wrote:
Hi,
We are using
Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
Java HotSpot(TM) 64-Bit Server VM (build 11.0-b16, mixed mode)
We have been using ES for a long time; this is the first time we have come
across this problem, or perhaps we just did not notice it previously.
On Fri, Apr 8, 2011 at 1:02 AM, Shay Banon <shay.banon@elasticsearch.com> wrote:
Heya,
Which version of the JDK are you using? Which vendor? How often does this
happen, and does it happen only on the client? You can try moving to the
blocking IO mode and see if that solves the problem; we can ping
Trustin / Netty once we have more info. To move to blocking mode,
set network.tcp.blocking to true.
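For reference, this is what the setting would look like in elasticsearch.yml (the same key can be passed programmatically as a client setting); a minimal sketch of the switch Shay describes:

```yaml
# elasticsearch.yml — switch the Netty transport from NIO to blocking (OIO) sockets,
# which avoids the epoll selector entirely at the cost of one thread per connection
network.tcp.blocking: true
```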
-shay.banon
On Thursday, April 7, 2011 at 11:17 PM, Mustafa Sener wrote:
The following seems to be a similar problem to mine:
[DIRMINA-678] NioProcessor 100% CPU usage on Linux (epoll selector bug) - ASF JIRA
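The epoll bug behind DIRMINA-678 makes Selector.select() return immediately with zero ready keys, so the boss thread spins at 100% CPU. The usual defensive workaround (Netty added something similar in later versions) is to detect repeated zero-key returns that did not block and replace the selector. This is a simplified sketch, not Netty's actual code; it uses selectNow() only to make the spin reproducible without a broken kernel/JDK, and a real loop would also re-register all channels on the fresh selector:

```java
import java.io.IOException;
import java.nio.channels.Selector;

public class SelectorSpinGuard {
    static final int SPIN_THRESHOLD = 10;

    /** Runs a bounded select loop; rebuilds the selector whenever it keeps
     *  returning 0 ready keys without blocking (the epoll spin symptom).
     *  Returns the number of times the selector was rebuilt. */
    public static int run(int iterations) throws IOException {
        Selector selector = Selector.open();
        int spins = 0, rebuilds = 0;
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            // Non-blocking here to simulate the bug; a real loop would use select(timeout).
            int ready = selector.selectNow();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (ready == 0 && elapsedMs == 0) {
                spins++;        // returned instantly with nothing ready: suspicious
            } else {
                spins = 0;      // did real work or actually blocked: reset
            }
            if (spins >= SPIN_THRESHOLD) {
                // Replace the broken selector; real code would re-register channels here.
                Selector fresh = Selector.open();
                selector.close();
                selector = fresh;
                rebuilds++;
                spins = 0;
            }
        }
        selector.close();
        return rebuilds;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("rebuilds=" + run(100));
    }
}
```

Upgrading to a JDK with the selector fix (1.6u18+) remains the proper cure; the rebuild trick only contains the damage.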
On Thu, Apr 7, 2011 at 11:14 PM, Mustafa Sener <mustafa.sener@gmail.com> wrote:
I have the following stack traces for the threads mentioned:
"New I/O client boss #30" daemon prio=10 tid=0x0000000046c6f800 nid=0x46a1 runnable [0x0000000058ad3000..0x0000000058ad3d10]
java.lang.Thread.State: RUNNABLE
at java.util.HashMap$KeySet.iterator(HashMap.java:874)
at java.util.HashSet.iterator(HashSet.java:153)
at sun.nio.ch.SelectorImpl.processDeregisterQueue(SelectorImpl.java:127)
- locked <0x00002aaaf3a9c5c0> (a java.util.HashSet)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:69)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- locked <0x00002aaaf3a9d108> (a sun.nio.ch.Util$1)
- locked <0x00002aaaf3a9d0f0> (a java.util.Collections$UnmodifiableSet)
- locked <0x00002aaaf3a9c530> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:239)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
"New I/O client boss #27" daemon prio=10 tid=0x0000000046302000 nid=0x46a3 runnable [0x0000000058bd4000..0x0000000058bd4c10]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:215)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- locked <0x00002aaaf3a9b5c0> (a sun.nio.ch.Util$1)
- locked <0x00002aaaf3a9b5a8> (a java.util.Collections$UnmodifiableSet)
- locked <0x00002aaaf3a9a9d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:239)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Locked ownable synchronizers:
- <0x00002aaaf3a98970> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
On Thu, Apr 7, 2011 at 11:10 PM, Mustafa Sener <mustafa.sener@gmail.com> wrote:
Hi,
We are using ES 0.15.2, with the transport client to connect to the ES
server from our application. We noticed that our application eats 100% of
the available CPU. When we inspected the threads in the JVM, we found two
suspicious ones:
Thread name                CPU usage (%)
New I/O client boss #30    48.736765821
New I/O client boss #27    48.9538801358
Do you have any ideas about this problem? I searched and found that some
other people using Netty have come across the same problem. Do you have any
suggestions for fixing this issue?
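Per-thread CPU figures like the ones above can be collected from inside the JVM with the standard ThreadMXBean (the usual external alternative is `top -H` plus `jstack` to match native thread IDs). A minimal, self-contained sketch; the class name is just for illustration:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class HotThreads {
    /** Builds a report of cumulative CPU time per live thread. */
    public static String report() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        for (long id : mx.getAllThreadIds()) {
            ThreadInfo info = mx.getThreadInfo(id);
            if (info == null) continue; // thread died between calls
            // CPU time in nanoseconds, or -1 if the JVM cannot measure it
            long cpuNanos = mx.isThreadCpuTimeSupported() ? mx.getThreadCpuTime(id) : -1;
            String cpu = cpuNanos >= 0
                    ? String.format("%.1f ms", cpuNanos / 1_000_000.0)
                    : "n/a";
            sb.append(String.format("%-40s %10s%n", info.getThreadName(), cpu));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(report());
    }
}
```

Sampling this twice a few seconds apart and diffing the values gives per-thread CPU *rate*, which is how a spinning boss thread stands out.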
Mustafa Sener
www.ifountain.com