Netty threads eat all of the CPU

Hi,
We are using ES 0.15.2, and we use the transport client to connect to the ES server from our
application. We noticed that our application eats 100% of the available CPU.
When we inspected the threads in the JVM, we found two suspicious threads:

Thread name                  CPU usage (%)
New I/O client boss #30      48.736765821
New I/O client boss #27      48.9538801358

Do you have any ideas about this problem? I searched around and found other
people using Netty who ran into the same problem. Do you have any
suggestions for fixing this issue?

Mustafa Sener
www.ifountain.com

Here are the stack traces for the threads mentioned above:

"New I/O client boss #30" daemon prio=10 tid=0x0000000046c6f800 nid=0x46a1 runnable [0x0000000058ad3000..0x0000000058ad3d10]
   java.lang.Thread.State: RUNNABLE
    at java.util.HashMap$KeySet.iterator(HashMap.java:874)
    at java.util.HashSet.iterator(HashSet.java:153)
    at sun.nio.ch.SelectorImpl.processDeregisterQueue(SelectorImpl.java:127)
    - locked <0x00002aaaf3a9c5c0> (a java.util.HashSet)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:69)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
    - locked <0x00002aaaf3a9d108> (a sun.nio.ch.Util$1)
    - locked <0x00002aaaf3a9d0f0> (a java.util.Collections$UnmodifiableSet)
    - locked <0x00002aaaf3a9c530> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:239)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)

"New I/O client boss #27" daemon prio=10 tid=0x0000000046302000 nid=0x46a3 runnable [0x0000000058bd4000..0x0000000058bd4c10]
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:215)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
    - locked <0x00002aaaf3a9b5c0> (a sun.nio.ch.Util$1)
    - locked <0x00002aaaf3a9b5a8> (a java.util.Collections$UnmodifiableSet)
    - locked <0x00002aaaf3a9a9d0> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:239)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)

   Locked ownable synchronizers:
    - <0x00002aaaf3a98970> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
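As an aside, for anyone reproducing this diagnosis: the `nid=0x...` field in a jstack dump is the OS thread id in hex, so the hot rows from `top -H` can be mapped straight to these stack traces. A minimal sketch (the tid value below is illustrative, chosen to match the nid of boss #30 above):

```shell
# 1. List the hottest threads of the JVM process (TID column):
#      top -H -p <jvm-pid>
# 2. Convert the decimal TID to hex; it matches the nid= field in jstack output.
TID=18081                      # illustrative decimal tid reported by top -H
NID=$(printf '0x%x' "$TID")
echo "$NID"                    # 0x46a1, i.e. "New I/O client boss #30" above
# 3. Pull that thread's stack from a dump:
#      jstack <jvm-pid> | grep -A 20 "nid=$NID"
```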

--
Mustafa Sener
www.ifountain.com

The following seems to be a similar problem to mine:

https://issues.apache.org/jira/browse/DIRMINA-678

--
Mustafa Sener
www.ifountain.com

Heya,

Which version of the JDK are you using? Which vendor? How often does this happen, and does it happen only on the client? You could try moving to the blocking IO mode and see if it solves the problem; we can try to ping Trustin / Netty once we have more info. In order to move to blocking mode, you can set network.tcp.blocking to true.

-shay.banon
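For concreteness, the setting mentioned above would go into the node's elasticsearch.yml (or into the transport client's settings) roughly like this; the spelling is taken from this thread only, since the setting is undocumented:

```yaml
# Undocumented diagnostic switch: fall back from NIO to blocking socket I/O.
network.tcp.blocking: true
```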

Hi,
We are using:

Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
Java HotSpot(TM) 64-Bit Server VM (build 11.0-b16, mixed mode)

We have been using ES for a long time, and this is the first time we have
come across this problem; or perhaps we just did not notice it before.


Heya,

So, you might have hit the bug that was fixed in JDK 1.6u18. Is there any reason why you don't use a newer version?

-shay.banon
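The bug referred to here is the well-known JDK epoll bug, where `Selector.select()` returns immediately with zero ready keys instead of blocking, turning event loops like Netty's boss thread into a busy spin. The following is not Netty's or the JDK's actual code, just a self-contained illustrative sketch of how such spinning can be detected; on a fixed JDK, `select(100)` with no registered channels blocks for the full timeout and the counter stays at zero:

```java
import java.io.IOException;
import java.nio.channels.Selector;

public class SpinDetect {
    // Counts select() calls that return 0 ready keys without having blocked.
    // On a JDK affected by the epoll bug this number climbs rapidly; on a
    // fixed JDK each select(100) blocks ~100 ms, so the count stays 0.
    static int countPrematureReturns() throws IOException {
        Selector selector = Selector.open();
        int premature = 0;
        for (int i = 0; i < 5; i++) {
            long start = System.nanoTime();
            int ready = selector.select(100);                 // expect ~100 ms block
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (ready == 0 && elapsedMs < 50) {
                premature++;                                  // the spin symptom
            }
        }
        selector.close();
        return premature;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("premature selector returns: " + countPrematureReturns());
    }
}
```

Frameworks that hit this in the wild (including later Netty versions) work around it by closing the misbehaving selector and re-registering its channels on a fresh one; upgrading past 6u18, as suggested above, removes the root cause.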

No, we can upgrade it. I just wanted to warn you about this problem.


Shay,

Is that "network.tcp.blocking" something new in ES? I cannot find it in the
docs.

Regards.


The network.tcp.blocking setting is not in the docs; I added it to test performance between blocking and non-blocking I/O, and kept it in. It is recommended not to use it (well, unless there are bugs with NIO).
On Friday, April 8, 2011 at 1:24 PM, Enrique Medina Montenegro wrote:

Shay,

Is that "network.tcp.blocking" something new in ES? I cannot find it in the docs.

Regards.

On Fri, Apr 8, 2011 at 12:02 AM, Shay Banon shay.banon@elasticsearch.com wrote:

Heya,

Which version of the jdk are you using? Which vendor? How often does this happen, and does it happen only on the client? You can try and move maybe to the blocking IO mode and see if it solves the problem, we can try and ping Trustin / Netty once we have more info. In order to move to blocking mode, you can set network.tcp.blocking to true.

-shay.banon
On Thursday, April 7, 2011 at 11:17 PM, Mustafa Sener wrote:

Following seems similar problem with mine

https://issues.apache.org/jira/browse/DIRMINA-678

On Thu, Apr 7, 2011 at 11:14 PM, Mustafa Sener mustafa.sener@gmail.com wrote:

I have following stack trace for thread mentioned

New I/O client boss #30" daemon prio=10 tid=0x0000000046c6f800 nid=0x46a1 runnable [0x0000000058ad3000..0x0000000058ad3d10]
java.lang.Thread.State: RUNNABLE
at java.util.HashMap$KeySet.iterator(HashMap.java:874)
at java.util.HashSet.iterator(HashSet.java:153)
at sun.nio.ch.SelectorImpl.processDeregisterQueue(SelectorImpl.java:127)

  • locked <0x00002aaaf3a9c5c0> (a java.util.HashSet)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:69)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
  • locked <0x00002aaaf3a9d108> (a sun.nio.ch.Util$1)
  • locked <0x00002aaaf3a9d0f0> (a java.util.Collections$UnmodifiableSet)
  • locked <0x00002aaaf3a9c530> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:239)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)

"New I/O client boss #27" daemon prio=10 tid=0x0000000046302000 nid=0x46a3 runnable [0x0000000058bd4000..0x0000000058bd4c10]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:215)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- locked <0x00002aaaf3a9b5c0> (a sun.nio.ch.Util$1)
- locked <0x00002aaaf3a9b5a8> (a java.util.Collections$UnmodifiableSet)
- locked <0x00002aaaf3a9a9d0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:239)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)

Locked ownable synchronizers:
- <0x00002aaaf3a98970> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)

On Thu, Apr 7, 2011 at 11:10 PM, Mustafa Sener mustafa.sener@gmail.com wrote:

Hi,
We are using ES 0.15.2. We use the transport client to connect to the ES server from our application. We noticed that our application eats 100% of the available CPU. When we inspected the threads in the JVM, we found two suspicious threads:

Thread name               CPU usage (%)
New I/O client boss #30   48.736765821
New I/O client boss #27   48.9538801358

Do you have any ideas about this problem? I searched and found that other people using Netty have run into the same problem. Do you have any suggestions for fixing this issue?

Mustafa Sener
www.ifountain.com

--
Mustafa Sener
www.ifountain.com


Understood :)

On Fri, Apr 8, 2011 at 1:31 PM, Shay Banon shay.banon@elasticsearch.com wrote:

The network.tcp.blocking setting is not in the docs; I added it to test performance between blocking and non-blocking IO, and kept it around. It is recommended not to use it (well, unless there are bugs with NIO).

On Friday, April 8, 2011 at 1:24 PM, Enrique Medina Montenegro wrote:

Shay,

Is that "network.tcp.blocking" something new in ES? I cannot find it in
the docs.

Regards.


I ran into this bug again.

ES is running on: JDK 1.8.0_31
Client is running on: JDK 1.7.0_45
Elasticsearch version: 1.4.3

Lots of threads are waiting like this, both in the client and on the server:

java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x00007f45623d2090> (a sun.nio.ch.Util$2)
- locked <0x00007f45623d2078> (a java.util.Collections$UnmodifiableSet)
- locked <0x00007f45623d1eb8> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:341)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:189)
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase.doExecute(CloseableHttpAsyncClientBase.java:67)
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase.access$000(CloseableHttpAsyncClientBase.java:38)
at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:57)
at java.lang.Thread.run(Thread.java:744)

The app is doing heavy writes, and it stays stuck here:

$ ~/tools/jdk1.8.0_31/bin/jstack 6142 | grep 'EPollArrayWrapper.epollWait' | wc -l
195
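Note that a high epollWait count by itself doesn't separate idle reactor threads (parked in epollWait at ~0% CPU, which is normal) from a selector that is actually spinning, like the original boss threads. One way to map a hot thread from top to its jstack entry (a sketch; the PID and TID values below are hypothetical):

```shell
PID=6142        # hypothetical application pid
# 1) top -H -p "$PID"    # (interactive) note the TID of any thread pinned near 100% CPU
TID=18081       # hypothetical hot thread id taken from top's output
# jstack prints thread ids as hexadecimal "nid" values, so convert:
NID=$(printf '0x%x' "$TID")
echo "$NID"     # -> 0x46a1
# 2) jstack "$PID" | grep -A 20 "nid=$NID"   # stack of the hot thread only
```

If the thread identified by top is one of the epollWait threads, it is genuinely spinning; if the hot nid belongs to something else, the epollWait threads are just idle.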

Any suggestions?

This is the thread-pool-related config:

threadpool.bulk.type: fixed
threadpool.bulk.size: 100
threadpool.bulk.queue_size: 300

threadpool.index.type: fixed
threadpool.index.size: 100
threadpool.index.queue_size: 100

indices.memory.index_buffer_size: 30%
indices.memory.min_shard_index_buffer_size: 1024mb
indices.memory.min_index_buffer_size: 1024mb

Thank you