Too many threads on Elasticsearch

Hi.

I've got an Elasticsearch cluster with three nodes.
I'm getting stack traces like this one:
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        - locked <0x0000000406628550> (a sun.nio.ch.Util$2)
        - locked <0x0000000406628540> (a java.util.Collections$UnmodifiableSet)
        - locked <0x0000000406628428> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
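
Almost all of the extra RUNNABLE threads show this same Netty worker stack. To count them in the client JVM I use something like the sketch below, based on the standard java.lang.management API (the "worker" name filter is only a guess at how the renamed Netty threads are called, so treat it as illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class WorkerThreadCount {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        int workers = 0;
        for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
            if (info == null) {
                continue; // thread exited between the two calls
            }
            // "worker" is a guess: ThreadRenamingRunnable (visible in the
            // stack above) gives Netty's NIO workers descriptive names.
            if (info.getThreadName().contains("worker")) {
                workers++;
            }
        }
        System.out.println("total threads: " + mx.getThreadCount()
                + ", netty-ish workers: " + workers);
    }
}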

From what I have read, it looks like a Netty bug, but that should have been fixed in Java 1.7.0_11 and higher, and I'm using Java 1.7.0_67. The Elasticsearch version is 1.7.3.
These are the node stats that I assume are related:

  "http" : {
    "current_open" : 6,
    "total_opened" : 405481
  }

The results are very similar on each node. There should be only one instance of the Java client connecting to Elasticsearch.
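
For context, the client is created once and shared, roughly along these lines; a minimal sketch of the 1.x TransportClient API with a placeholder cluster name and host, not the exact production code:

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class SharedClient {
    // One TransportClient per JVM: each instance maintains its own
    // Netty worker pool, so a client created per request leaks threads.
    private static final Client CLIENT = build();

    private static Client build() {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "my-cluster") // placeholder
                .build();
        return new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("es-node-1", 9300)); // placeholder host
    }

    public static Client get() {
        return CLIENT;
    }

    public static void shutdown() {
        CLIENT.close(); // releases the Netty worker threads
    }
}

As far as I understand, a TransportClient only releases its worker threads on close(), which is why I point out that there is a single instance.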

Any ideas what may cause such errors? Is it a problem with the Java client, with the configuration, or maybe a bug?