No node available, a really big headache!


(spancer ray) #1

Hi Guys,

This "no node available" exception is really troublesome; I always get it when doing a scroll search. I've got two nodes (64 GB memory each), and the detailed exception message is below:
WARN - Log4jESLogger.internalWarn(129) | [PC-20131030YGAR] exception caught on transport layer [[id: 0xd9b9ad6d, /0:0:0:0:0:0:0:0:64697 => /192.168.1.110:9300]], closing connection
java.io.IOException: An existing connection was forcibly closed by the remote host
    at sun.nio.ch.SocketDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
    at sun.nio.ch.IOUtil.read(IOUtil.java:186)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
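For reference, a `NoNodeAvailableException` from the Java `TransportClient` typically means the client has dropped every node from its connected list, often after a ping timed out while the node was busy (e.g. serving a large scroll). The relevant client settings from the 0.90.x-era documentation can be tuned; this is a sketch with placeholder values, not a measured recommendation:

    # TransportClient settings (names from the 0.90.x docs); values are examples to tune.
    client.transport.ping_timeout: 10s            # default 5s; raise if pings time out under heavy scroll load
    client.transport.nodes_sampler_interval: 10s  # how often the client re-checks the nodes it knows about
    client.transport.sniff: true                  # discover the rest of the cluster from the seed nodes

With sniffing enabled, losing one node's connection is less likely to leave the client with an empty node list, since it keeps the full cluster state rather than only the addresses it was given.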

Can anyone explain why this keeps happening?


(David Pilato) #2

How much RAM do you give to the JVM?
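(The usual guidance for that era: give the JVM about half of physical RAM, but stay below ~31 GB so compressed object pointers remain enabled. A sketch of the relevant settings; the 31g value is an example for a 64 GB box, not a tuned figure:)

    # Environment for the Elasticsearch launch script (0.90.x/1.x style):
    export ES_HEAP_SIZE=31g      # ~half of 64 GB, under the compressed-oops ceiling

    # elasticsearch.yml: lock the heap in RAM to avoid swap-induced GC pauses
    bootstrap.mlockall: true

Long GC pauses from an over- or under-sized heap are a common cause of dropped transport connections like the one in the log above.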

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 26 Nov 2013 at 01:53, spancer ray spancer.roc.ray@gmail.com wrote:

--
View this message in context: http://elasticsearch-users.115913.n3.nabble.com/No-node-available-a-really-big-headache-tp4044975.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.



(system) #3