Server failed exception


(Abhishek Jajoria) #1

After indexing documents for some time, I get the following exception, even
though the server is up and running.

12/05/18 12:07:21 INFO client.transport: [Neophyte] failed to get node info
for [#transport#-1][inet[/192.168.5.169:9300]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException:
[][inet[/192.168.5.169:9300]][cluster/nodes/info] request_id [81658] timed
out after [5387ms]
org.elasticsearch.client.transport.NoNodeAvailableException: No node
available
at
org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:347)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:619)
12/05/18 12:07:22 WARN elasticsearch.transport: [Neophyte] Received
response for a request that has timed out, sent [8457ms] ago, timed out
[3070ms] ago, action [cluster/nodes/info], node
[[#transport#-1][inet[/192.168.5.169:9300]]], id [81658]
12/05/18 12:07:21 WARN transport.netty: [Neophyte] Exception caught on
netty layer [[id: 0x0088df60, /192.168.5.212:51196 => /192.168.5.169:9300]]
java.lang.OutOfMemoryError: Java heap space
at
org.elasticsearch.common.compress.lzf.BufferRecycler.allocDecodeBuffer(BufferRecycler.java:150)
at
org.elasticsearch.common.io.stream.LZFStreamInput.&lt;init&gt;(LZFStreamInput.java:91)
at
org.elasticsearch.common.io.stream.CachedStreamInput.instance(CachedStreamInput.java:48)
at
org.elasticsearch.common.io.stream.CachedStreamInput.getCharArray(CachedStreamInput.java:79)
at
org.elasticsearch.common.io.stream.StreamInput.readUTF(StreamInput.java:160)
at
org.elasticsearch.common.io.stream.HandlesStreamInput.readUTF(HandlesStreamInput.java:49)
at
org.elasticsearch.action.index.IndexResponse.readFrom(IndexResponse.java:142)
at
org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:254)
at
org.elasticsearch.transport.netty.MessageChannelHandler.process(MessageChannelHandler.java:233)
at
org.elasticsearch.transport.netty.MessageChannelHandler.callDecode(MessageChannelHandler.java:141)
at
org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:95)
at
org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at
org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at
org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at
org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at
org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at
org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:94)
at
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:364)
at
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:238)
at
org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
at
org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:619)


(Shay Banon) #2

You are running out of memory on the client that is indexing the data.

On Fri, May 18, 2012 at 8:41 AM, jajoria abhishek <
jajoria.abhishek@gmail.com> wrote:



(Abhishek Jajoria) #3

How can I avoid these exceptions? I increased the heap allocated to
Elasticsearch from 512m to 1024m, but I still get the out-of-heap-memory
exception. I have limited RAM (4 GB) and want to index 14 million documents.
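Note that the OutOfMemoryError in the log is thrown in the client JVM (the process running the TransportClient), so raising the server's heap will not help by itself; the -Xmx flag has to reach the indexing process. A minimal JDK-only check (no assumptions about the ES API) to confirm which heap size the process actually got:

```java
public class HeapCheck {
    /** Maximum heap available to the current JVM, in megabytes. */
    public static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        // Run this inside the indexing process (e.g. `java -Xmx1024m ...`)
        // and confirm the printed value matches the flag you passed.
        System.out.println("max heap (MB): " + maxHeapMb());
    }
}
```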

On Mon, May 21, 2012 at 1:49 AM, Shay Banon kimchy@gmail.com wrote:



(Ivan Brusic) #4

Many questions: Are you bulk indexing the documents? What is your bulk
size? Multi-threaded? Synchronous or asynchronous index calls?

You might simply be keeping too many documents in memory before
sending an index request to ES.

--
Ivan

On Mon, May 21, 2012 at 9:58 PM, jajoria abhishek
jajoria.abhishek@gmail.com wrote:



(Abhishek Jajoria) #5

No bulk indexing; I just build the JSON, fill in the fields from the
database result set, and then add each document to the index individually,
like this:

XContentBuilder b = XContentFactory.jsonBuilder().startObject();
b.startObject("Item");
b.field("A2", results.getString("N1Type"));
b.endObject();  // close "Item"
b.endObject();  // close root object
IndexRequestBuilder irb =
    client.prepareIndex("astrology_sdi", "astrologystatic").setSource(b);
irb.execute().actionGet();
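One way to act on Ivan's bulk suggestion without holding millions of documents in memory at once is to flush a bounded batch every N documents. This sketch keeps the batching logic in plain JDK Java so it stands alone; the ES-specific calls are shown only as comments because they need the 0.19-era client jar, and the batch size and names are illustrative, not a recommendation:

```java
import java.util.ArrayList;
import java.util.List;

public class BoundedBatcher {
    /**
     * Split items into consecutive batches of at most batchSize, so each
     * batch can be sent as one bulk request and then dropped, keeping the
     * client's memory use bounded regardless of the total document count.
     */
    public static <T> List<List<T>> batches(List<T> items, int batchSize) {
        List<List<T>> out = new ArrayList<List<T>>();
        for (int i = 0; i < items.size(); i += batchSize) {
            out.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return out;
    }

    // Sketch of the ES side for one batch (0.19-era API, not compiled here):
    //   BulkRequestBuilder bulk = client.prepareBulk();
    //   for (XContentBuilder doc : batch) {
    //       bulk.add(client.prepareIndex("astrology_sdi", "astrologystatic")
    //                      .setSource(doc));
    //   }
    //   BulkResponse resp = bulk.execute().actionGet(); // synchronous call
    //   if (resp.hasFailures()) { /* log and retry the batch */ }
}
```

Calling actionGet() per batch keeps the index calls synchronous, so only one batch is ever in flight, which also addresses Ivan's sync-vs-async question.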

On Tue, May 22, 2012 at 10:36 PM, Ivan Brusic ivan@brusic.com wrote:


