My ES gets stuck once a week for no apparent reason?

I have a 4-box ES cluster installed; the version I am using is 0.90.10, but it
fails about once a week. What I get is a 50X error in Kibana. When I check the
logs, one of the nodes is stuck. It is fine again after a restart, and memory
usage looks fine on all of them. What else can I check?
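As a starting point, here is a minimal sketch of an external check that can narrow down which box stops answering. The hostnames and the default HTTP port 9200 are placeholders/assumptions, not taken from the cluster above; it simply polls the root endpoint and /_cluster/health on every node, so the stuck box stands out as the one that times out.

#!/usr/bin/env python3
"""Poll each ES box over HTTP to see which one stops answering.

The hostnames and port 9200 below are assumptions -- replace them with the
real addresses of the four boxes.
"""
import json
import urllib.request

NODES = ["es-box1", "es-box2", "es-box3", "es-box4"]  # hypothetical hostnames
PORT = 9200


def get_json(url, timeout=5):
    """Fetch a URL and decode the JSON body; raises on HTTP/socket errors."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))


for host in NODES:
    base = "http://{}:{}".format(host, PORT)
    try:
        # The root endpoint answers on any live node and reports the ES version.
        info = get_json(base + "/")
        # _cluster/health shows the cluster state as seen from this node.
        health = get_json(base + "/_cluster/health")
        print("{}: version={} status={} nodes={}".format(
            host,
            info.get("version", {}).get("number"),
            health.get("status"),
            health.get("number_of_nodes")))
    except Exception as exc:  # a stuck box will usually time out here
        print("{}: NOT RESPONDING ({})".format(host, exc))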


On normal days I see this log entry all the time, but only on one box:

[2014-03-13 16:36:31,205][DEBUG][action.admin.cluster.stats] [Lloigoroth] failed to execute on node [Mo2-u0RSQT6qqbMjW1CWag]
org.elasticsearch.transport.RemoteTransportException: [Ketch, Dan][inet[/172.22.4.23:9300]][cluster/stats/n]
Caused by: org.elasticsearch.transport.ActionNotFoundTransportException: No handler for action [cluster/stats/n]
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:205)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:108)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
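
The "No handler for action [cluster/stats/n]" line means the node at 172.22.4.23 has no transport handler registered for the cluster-stats action, which usually happens when the boxes are not all running the same Elasticsearch version (or plugin set), so that is worth ruling out. Below is a minimal sketch for that check; the hostnames and the default HTTP port 9200 are placeholders, not taken from the logs above.

#!/usr/bin/env python3
"""Compare the version each node reports, to rule out a mixed-version cluster.

Hostnames and port 9200 are assumptions -- use the real boxes.
"""
import json
import urllib.request

NODES = ["es-box1", "es-box2", "es-box3", "es-box4"]  # hypothetical hostnames

versions = {}
for host in NODES:
    url = "http://{}:9200/".format(host)
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = json.loads(resp.read().decode("utf-8"))
        # The root endpoint includes version.number in every ES release.
        versions[host] = body.get("version", {}).get("number", "unknown")
    except Exception as exc:
        versions[host] = "unreachable ({})".format(exc)

for host, version in versions.items():
    print("{}: {}".format(host, version))

reachable = {v for v in versions.values() if not v.startswith("unreachable")}
if len(reachable) > 1:
    print("WARNING: nodes do not all report the same version")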

