Getting "Failure when receiving data from the peer" error when loading bulk data

Hi,

I am running into a frequently occurring problem that affects bulk loading data into the index.
I have Elasticsearch 1.7 installed with Java 8.
When I try to load bulk data using a curl command, the request fails with "Failure when receiving data from the peer".
I wrote a shell script in which a curl command bulk-loads JSON documents.

Sample shell script:

for file_name in load_json01*.json
do
    echo "${file_name}" >> bulk_load.log
    date >> bulk_load.log
    curl --user admin:password -XPOST 'http://localhost:port/load_data/data/_bulk' --data-binary "@${file_name}" >> bulk_load01_details.log
done
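
A variant of the same loop that also records curl's exit status per file (a sketch, assuming bash; bulk_load_status.log is just an extra log file name):

for file_name in load_json01*.json
do
    echo "${file_name}" >> bulk_load.log
    date >> bulk_load.log
    curl --user admin:password -XPOST 'http://localhost:port/load_data/data/_bulk' \
         --data-binary "@${file_name}" >> bulk_load01_details.log
    # curl's exit status: 0 = success, 56 = failure receiving data, 23 = failure writing body
    echo "${file_name} exit=$?" >> bulk_load_status.log
done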

I executed the shell script in PuTTY, and the following error was shown:

% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
60 174M 0 0 60 104M 0 110M 0:00:01 --:--:-- 0:00:01 110M
curl: (56) Failure when receiving data from the peer

Please guide me to fix this issue.

Thanks,
Ganeshbabu R

What do the ES logs show?

Mark,

I am running this shell script as a background process. The command is:

nohup ./bulk_load.sh &

When I checked the nohup.out log (using cat nohup.out), I found the following details.

I have provided only a sample because the log contains thousands of entries.

{"took":42207,"errors":false,"items":[{"index":{"_index":"bulk_item","_type":"item","_id":"10129","_version":1,"status":201}}

But the error below is shown in PuTTY:

% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
60 174M 0 0 60 104M 0 110M 0:00:01 --:--:-- 0:00:01 110M
curl: (56) Failure when receiving data from the peer

Regards
Ganesh

Right, but what about the actual Elasticsearch instance? There should be something in its logs.

Hi Mark,

Sorry for the late reply.

I missed this error in my earlier update. Below is an issue that occurs while loading bulk data.

curl: (23) Failed writing body --> Could you please tell us why it failed to write the data?

On the actual Elasticsearch instance, the error below is shown in the log file during the bulk load.

[2015-10-06 07:18:52,933][WARN ][monitor.jvm ] [QA_MASTER] [gc][old][3545][835] duration [16.1s], collections [1]/[16.9s], total [16.1s]/[1.3h], memory [9.6gb]->[9.3gb]/[9.7gb], all_pools {[young] [2.1gb]->[2gb]/[2.1gb]}{[survivor] [154.8mb]->[0b]/[274.5mb]}{[old] [7.3gb]->[7.3gb]/[7.3gb]}
[2015-10-06 07:19:07,800][WARN ][monitor.jvm ] [QA_MASTER] [gc][old][3547][836] duration [12.9s], collections [1]/[13.1s], total [12.9s]/[1.4h], memory [9.7gb]->[9.3gb]/[9.7gb], all_pools {[young] [2.1gb]->[2gb]/[2.1gb]}{[survivor] [245.2mb]->[0b]/[274.5mb]}{[old] [7.3gb]->[7.3gb]/[7.3gb]}
[2015-10-06 07:19:21,772][WARN ][monitor.jvm ] [QA_MASTER] [gc][old][3548][837] duration [13.7s], collections [1]/[14.7s], total [13.7s]/[1.4h], memory [9.3gb]->[9.3gb]/[9.7gb], all_pools {[young] [2gb]->[2gb]/[2.1gb]}{[survivor] [0b]->[0b]/[274.5mb]}{[old] [7.3gb]->[7.3gb]/[7.3gb]}
[2015-10-06 07:19:21,784][WARN ][shield.transport.netty ] [QA_MASTER] Caught exception while handling client http traffic, closing connection [id: 0xc2df9476, /localhost:58382 => /localhost:9200]
org.elasticsearch.common.netty.handler.codec.frame.TooLongFrameException: HTTP content length exceeded 104857600 bytes.
at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:169)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:135)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.handler.ipfilter.IpFilteringHandlerImpl.handleUpstream(IpFilteringHandlerImpl.java:154)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:33
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)

Please let me know if there is any fix for this issue.

Regards,
Ganeshbabu R

I'd suggest you look around for solutions to that :slight_smile:
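
For reference, the TooLongFrameException above points at the http.max_content_length setting, which defaults to 100 MB (104857600 bytes): any single bulk request larger than that is rejected and the connection is closed, which typically surfaces on the client as the curl (56) failure seen above. A common workaround is to keep each request under the limit, either by raising the setting in elasticsearch.yml or by splitting the bulk files before posting. A rough sketch of the splitting approach, assuming each document takes exactly two lines (action line plus source line) and using hypothetical file names:

# split a large bulk file into chunks of 20000 lines (10000 documents);
# an even line count keeps each action line paired with its source line
split -l 20000 load_json01_big.json load_json01_part_

# post each chunk separately so no single request exceeds http.max_content_length
for part in load_json01_part_*
do
    curl --user admin:password -XPOST 'http://localhost:port/load_data/data/_bulk' \
         --data-binary "@${part}" >> bulk_load01_details.log
done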