Elasticsearch crashes with 5.3.1 client


(Ninad Pradhan) #1

I'm hitting the issue discussed here. Does anyone know how to fix it?

I've tried the above options with ES 5.3.1 ... I'm still getting the same issue: when I run our test cases six times, every sixth run hits the same error. It's a single-node cluster with the settings below...
cluster.name: test
node.name: test-0
node.master: true
node.data: true
path.data: /tmp/elasticsearch
path.logs: /tmp/eslogs

If I increase the Java heap size it runs a few more times but still crashes eventually...
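(For reference, the heap bump I tried is just the standard JVM flags on the JVM running the tests; the sizes and the main class below are placeholders for our actual setup.)

```shell
# Illustrative only: raise the test JVM's heap before re-running the suite.
# -Xms sets the initial heap, -Xmx the maximum; class name is a placeholder.
java -Xms1g -Xmx2g -cp target/test-classes com.example.EsIntegrationTests
```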

12:49:31.022 INFO [PluginsService]: no modules loaded
12:49:31.025 INFO [PluginsService]: loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin]
12:49:31.025 INFO [PluginsService]: loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
12:49:31.025 INFO [PluginsService]: loaded plugin [org.elasticsearch.script.mustache.MustachePlugin]
12:49:31.025 INFO [PluginsService]: loaded plugin [org.elasticsearch.transport.Netty3Plugin]
12:49:31.025 INFO [PluginsService]: loaded plugin [org.elasticsearch.transport.Netty4Plugin]
12:49:34.874 ERROR [Netty4Utils]: fatal error on the network layer
at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:140)
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:344)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:242)
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)
at org.elasticsearch.client.transport.TransportClient.doExecute(Transp


(Kontranavoj) #2

Upgrade to 5.4?


(Ninad Pradhan) #3

Have you seen anything in particular in 5.4 that fixes this issue? 5.4 requires code changes for the histogram builder, so I'm holding off on it. @jovanmal


(Kontranavoj) #4

No, it was just an idea for a change that might resolve the issue.

You can set up a new test environment to try it.

That's how I would start dealing with the problem if I were in your position.


(Ninad Pradhan) #5

Took that advice and tried 5.4.0: no joy, the exact same issue.


(Kontranavoj) #6

Hmm, network layer... TCP/IP...

Is more than one IP address configured on your server, or just one?
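A quick way to check, assuming a Linux host (adjust for your OS):

```shell
# List configured IPv4 addresses on all interfaces.
ip -4 addr show | grep inet
```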


(Ninad Pradhan) #7

just one


(Ninad Pradhan) #8

Is anyone else facing this issue? Posting the error again...

1:59:05.817 ERROR [Netty4Utils]: fatal error on the network layer
at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:140)
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:83)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257)
at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257)
at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1301)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265)
at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:914)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:99)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:140)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "Thread-22" java.lang.OutOfMemoryError: Java heap space
at io.netty.buffer.PoolArena$HeapArena.newChunk(PoolArena.java:656)
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:237)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:221)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:141)
at io.netty.buffer.PooledByteBufAllocator.newHeapBuffer(PooledByteBufAllocator.java:272)
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:160)
at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:151)
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
^C^C^CException in thread "elasticsearch[client][generic][T#2]" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "elasticsearch[client][generic][T#1]" java.lang.OutOfMemoryError: GC overhead limit exceeded
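
The OutOfMemoryError is thrown in the client JVM, so one thing worth checking (an assumption on my part, not something confirmed in this thread) is whether each test run closes its TransportClient. The client implements Closeable, and leaking one instance per run would explain a crash only after several runs. A minimal sketch of the close-per-run pattern, using a hypothetical FakeClient stand-in so it runs without the Elasticsearch jars:

```java
// Sketch of the suspected leak pattern: one client created per test run.
// FakeClient is a hypothetical stand-in for PreBuiltTransportClient,
// here only to illustrate try-with-resources; it is not an ES class.
public class ClientLifecycleSketch {
    static int openClients = 0;

    static class FakeClient implements AutoCloseable {
        FakeClient() { openClients++; }                    // simulates grabbing heap/network buffers
        @Override public void close() { openClients--; }   // releases them
    }

    public static void main(String[] args) {
        // Simulate six test runs; without close(), openClients would grow to 6.
        for (int run = 0; run < 6; run++) {
            try (FakeClient client = new FakeClient()) {
                // ... run the tests against the cluster ...
            }
        }
        System.out.println("open clients after 6 runs: " + openClients);
    }
}
```

In the real tests the try-with-resources block would wrap the actual TransportClient instead of the stand-in, so each run releases its buffers before the next one starts.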


(system) #9

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.