No node available and OutOfMemoryError exception

Hi,

I am inserting documents into ES at high speed and in large volume. After several
minutes I get a NoNodeAvailableException. The stack trace is below:

Exception in thread "pool-2-thread-82" org.elasticsearch.client.transport.NoNodeAvailableException: No node available
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:205)
    at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:97)
    at org.elasticsearch.client.support.AbstractClient.bulk(AbstractClient.java:141)
    at org.elasticsearch.client.transport.TransportClient.bulk(TransportClient.java:328)
    at org.elasticsearch.action.bulk.BulkRequestBuilder.doExecute(BulkRequestBuilder.java:128)
    at org.elasticsearch.action.support.BaseRequestBuilder.execute(BaseRequestBuilder.java:53)
    at org.elasticsearch.action.support.BaseRequestBuilder.execute(BaseRequestBuilder.java:47)

This is the log file content:

[2012-09-25 08:18:32,430][WARN ][transport.netty ] [Princess Python] Exception caught on netty layer [[id: 0x5fcb98d6, /127.0.0.1:50773 => /127.0.0.1:9300]]
java.lang.OutOfMemoryError: Java heap space
    at org.elasticsearch.common.io.stream.StreamInput.readBytesHolder(StreamInput.java:57)
    at org.elasticsearch.common.io.stream.StreamInput.readBytesReference(StreamInput.java:67)
    at org.elasticsearch.common.io.stream.AdapterStreamInput.readBytesReference(AdapterStreamInput.java:36)
    at org.elasticsearch.action.index.IndexRequest.readFrom(IndexRequest.java:700)
    at org.elasticsearch.action.bulk.BulkRequest.readFrom(BulkRequest.java:294)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:339)
    at org.elasticsearch.transport.netty.MessageChannelHandler.process(MessageChannelHandler.java:243)
    at org.elasticsearch.transport.netty.MessageChannelHandler.callDecode(MessageChannelHandler.java:154)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:101)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:91)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)

The environment:
ES version: 0.19.8, single node
JVM parameter: ES_MAX_MEM=4g

How can I handle this? Or how can I improve indexing performance? Thanks
very much!

--

Can anybody help? Thanks!

--

Sawyer Zhu wrote:

I am inserting documents into ES at high speed and in large volume. After several
minutes I get a NoNodeAvailableException. The stack trace is below:

It looks like you're probably creating bulk requests that are too
large. Have you tried reducing the number of documents you're
supplying with each one?
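
For example, a minimal sketch of chunked bulk indexing with the Java
client API (the index name, type, chunk size, and jsonDocs iterable are
placeholders, not taken from your code):

import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;

public class ChunkedIndexer {

    // Placeholder chunk size; tune it to your document size. A few
    // hundred to a few thousand docs per bulk is a common starting point.
    private static final int CHUNK_SIZE = 1000;

    public static void indexAll(Client client, Iterable<String> jsonDocs) {
        BulkRequestBuilder bulk = client.prepareBulk();
        int count = 0;
        for (String json : jsonDocs) {
            bulk.add(client.prepareIndex("myindex", "mytype").setSource(json));
            if (++count % CHUNK_SIZE == 0) {
                // Blocking call: only one bulk request is in flight at a
                // time, which also throttles the client.
                BulkResponse resp = bulk.execute().actionGet();
                if (resp.hasFailures()) {
                    // inspect or log the failed items here
                }
                bulk = client.prepareBulk(); // start a fresh request
            }
        }
        if (count % CHUNK_SIZE != 0) {
            bulk.execute().actionGet(); // flush the last partial chunk
        }
    }
}

The blocking actionGet() means each chunk completes before the next one
is sent, so the server never has to hold your whole stream in memory at
once.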

-Drew

--

I agree with Drew.

Are you able to give us an idea about what sort of 'high speed' and 'large
number' you're using?

--

Do you also search at the same time, and maybe facet?

If you index a lot of documents and search at the same time, a lot of
cache and JVM memory is used to make your fresh documents visible. 4GB
is not much either, but let's get down to the root cause.
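
If the documents don't need to be searchable the moment they are
indexed, one common way to reduce that pressure is to disable the
automatic refresh during the bulk load and restore it afterwards. A
sketch with the Java admin API, assuming 0.19's settings builder and a
hypothetical index name "myindex":

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

public class RefreshToggle {

    // Disable periodic refresh ("-1") while the bulk load runs.
    public static void disableRefresh(Client client) {
        client.admin().indices().prepareUpdateSettings("myindex")
            .setSettings(ImmutableSettings.settingsBuilder()
                .put("index.refresh_interval", "-1"))
            .execute().actionGet();
    }

    // Restore the default 1s refresh once the load is finished.
    public static void restoreRefresh(Client client) {
        client.admin().indices().prepareUpdateSettings("myindex")
            .setSettings(ImmutableSettings.settingsBuilder()
                .put("index.refresh_interval", "1s"))
            .execute().actionGet();
    }
}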

simon

--