Hi,

I am inserting documents into ES at high speed and in large volume. After
several minutes I get a NoNodeAvailableException. The stack trace is below:
Exception in thread "pool-2-thread-82" org.elasticsearch.client.transport.NoNodeAvailableException: No node available
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:205)
    at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:97)
    at org.elasticsearch.client.support.AbstractClient.bulk(AbstractClient.java:141)
    at org.elasticsearch.client.transport.TransportClient.bulk(TransportClient.java:328)
    at org.elasticsearch.action.bulk.BulkRequestBuilder.doExecute(BulkRequestBuilder.java:128)
    at org.elasticsearch.action.support.BaseRequestBuilder.execute(BaseRequestBuilder.java:53)
    at org.elasticsearch.action.support.BaseRequestBuilder.execute(BaseRequestBuilder.java:47)
This is the log file content:
[2012-09-25 08:18:32,430][WARN ][transport.netty ] [Princess Python] Exception caught on netty layer [[id: 0x5fcb98d6, /127.0.0.1:50773 => /127.0.0.1:9300]]
java.lang.OutOfMemoryError: Java heap space
    at org.elasticsearch.common.io.stream.StreamInput.readBytesHolder(StreamInput.java:57)
    at org.elasticsearch.common.io.stream.StreamInput.readBytesReference(StreamInput.java:67)
    at org.elasticsearch.common.io.stream.AdapterStreamInput.readBytesReference(AdapterStreamInput.java:36)
    at org.elasticsearch.action.index.IndexRequest.readFrom(IndexRequest.java:700)
    at org.elasticsearch.action.bulk.BulkRequest.readFrom(BulkRequest.java:294)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:339)
    at org.elasticsearch.transport.netty.MessageChannelHandler.process(MessageChannelHandler.java:243)
    at org.elasticsearch.transport.netty.MessageChannelHandler.callDecode(MessageChannelHandler.java:154)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:101)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:91)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)
The environment:
ES version: 0.19.8, single node
JVM parameter: ES_MAX_MEM=4g

How can I handle this? Or how can I improve the indexing performance? Thanks
very much!
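One thing I am considering is bounding how many bulk requests are in flight at once, so the server never has to buffer an unbounded number of batches. Below is a simplified, self-contained sketch of that idea; the actual ES bulk call is stubbed out, and the batch size and concurrency limit are illustrative values, not my real settings:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch: cap concurrent bulk requests with a Semaphore so the node
// never receives more than MAX_IN_FLIGHT * BATCH_SIZE documents at once.
public class ThrottledBulkSketch {
    static final int BATCH_SIZE = 500;   // docs per bulk request (illustrative)
    static final int MAX_IN_FLIGHT = 4;  // concurrent bulk requests (illustrative)

    // Stand-in for building a BulkRequest and blocking on its response.
    static void sendBulk(List<String> batch) {
        // real code: client.prepareBulk()...execute().actionGet()
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(MAX_IN_FLIGHT);
        Semaphore inFlight = new Semaphore(MAX_IN_FLIGHT);
        int batchesSent = 0;
        List<String> batch = new ArrayList<String>();
        for (int i = 0; i < 2000; i++) {
            batch.add("doc-" + i);
            if (batch.size() == BATCH_SIZE) {
                final List<String> toSend = batch;
                batch = new ArrayList<String>();
                inFlight.acquire(); // block here instead of piling up requests
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            sendBulk(toSend);
                        } finally {
                            inFlight.release();
                        }
                    }
                });
                batchesSent++;
            }
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("batches=" + batchesSent);
    }
}
```

Does this kind of client-side throttling make sense here, or is there a better way to size the bulk requests?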