[Hadoop][Pig] Timeout issues indexing data

Hi,

I have an experimental setup with Hadoop, Pig, and Elasticsearch.
I'm trying to upload 129M very small records into an index via the Pig
STORE statement, using the following commands on Cloudera CDH 4.4.0 on
CentOS 6.4, 64-bit:

REGISTER
./elasticsearch-hadoop-1.3.0.M1/dist/elasticsearch-hadoop-1.3.0.M1-yarn.jar
...
STORE WantedOrders INTO 'orderitems/orderitem' USING
org.elasticsearch.hadoop.pig.ESStorage('es.host=node11.kluster.basjes.lan;es.port=80;es.mapping.names=date:@timestamp');
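For reference, the same STORE can also carry the connector's bulk-sizing properties (the property names come from the elasticsearch-hadoop configuration; the values below are illustrative assumptions, not something I have tuned):

```pig
-- Same STORE, with the bulk batch size made explicit so each _bulk
-- request stays small; 500 entries / 1mb are guesses, not tested values.
STORE WantedOrders INTO 'orderitems/orderitem' USING
org.elasticsearch.hadoop.pig.ESStorage('es.host=node11.kluster.basjes.lan;es.port=80;es.mapping.names=date:@timestamp;es.batch.size.entries=500;es.batch.size.bytes=1mb');
```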

On node11 there is an HAProxy instance that load-balances over the 6 ES
instances of my cluster.
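Since the 504 comes from HAProxy rather than from ES itself, one thing worth checking (an assumption on my part; the exact directive names depend on the HAProxy version) is the server-side timeout in haproxy.cfg, since slow _bulk responses can exceed the default:

```
# haproxy.cfg sketch: raise the server timeout so long-running _bulk
# requests are not cut off with a 504 (values are illustrative).
defaults
    mode http
    timeout connect 5s
    timeout client  5m
    timeout server  5m
```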

Using the experimental hardware (6 very old Pentium 4 desktops) I'm able to
get about 10M inserts per hour, which is good enough for my experiments.

Initially I ran the job on one of the real Hadoop clusters here, and with 40
mappers pushing in the data the ES cluster failed quite quickly with a "504
Gateway Time-out: The server didn't respond in time." error from HAProxy.

So I ran it as a (single-threaded?) local Pig job (pig -x local ...) to
avoid overloading the ES cluster.

The problem I have is that after about 6.5 hours of running time (I expected
the complete run to take about 12-13 hours) it again gave a "504 Gateway
Time-out" and the Pig job simply aborted.

I've examined the ES logs on all nodes and found no log entries at all
from around the timestamp when the error occurred.

How do I solve this problem?

Niels Basjes

The full output I get:

2013-10-31 20:39:20,934 [Thread-3] WARN  org.apache.hadoop.mapred.LocalJobRunner - job_local962577332_0001
java.lang.Exception: java.lang.IllegalStateException: [POST] on [orderitems/orderitem/_bulk] failed; server[http://node11.kluster.basjes.lan] returned [504 Gateway Time-out - The server didn't respond in time.]
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:406)
Caused by: java.lang.IllegalStateException: [POST] on [orderitems/orderitem/_bulk] failed; server[http://node11.kluster.basjes.lan] returned [504 Gateway Time-out - The server didn't respond in time.]
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:178)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:165)
    at org.elasticsearch.hadoop.rest.RestClient.bulk(RestClient.java:95)
    at org.elasticsearch.hadoop.rest.BufferedRestClient.flushBatch(BufferedRestClient.java:192)
    at org.elasticsearch.hadoop.rest.BufferedRestClient.doAddToIndex(BufferedRestClient.java:168)
    at org.elasticsearch.hadoop.rest.BufferedRestClient.addToIndex(BufferedRestClient.java:137)
    at org.elasticsearch.hadoop.mr.ESOutputFormat$ESRecordWriter.write(ESOutputFormat.java:135)
    at org.elasticsearch.hadoop.pig.ESStorage.putNext(ESStorage.java:155)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:558)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:106)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:285)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:278)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:268)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
2013-10-31 20:39:23,948 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2013-10-31 20:39:23,948 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_local962577332_0001 has failed! Stop running all dependent jobs
2013-10-31 20:39:23,949 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2013-10-31 20:39:23,950 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2013-10-31 20:39:23,951 [main] INFO  org.apache.pig.tools.pigstats.SimplePigStats - Detected Local mode. Stats reported below may be incomplete
2013-10-31 20:39:23,952 [main] INFO  org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:

HadoopVersion: 2.0.0-cdh4.4.0
PigVersion:    0.11.0-cdh4.4.0
UserId:        nbasjes
StartedAt:     2013-10-31 14:10:21
FinishedAt:    2013-10-31 20:39:23
Features:      UNKNOWN


Hi Niels,

Were you able to solve the problem? I am having a similar issue.
I wrote a Pig script to index a large file of JSON documents and am running it on a Hadoop cluster. I have an Elasticsearch instance on one of the nodes.

It works fine for <90MB of data, but fails with the error message below for 93MB of data.

java.lang.IllegalStateException: Cannot get response body for [POST][emps/emp/_bulk]
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:189)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:165)
    at org.elasticsearch.hadoop.rest.RestClient.bulk(RestClient.java:95)
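One setting that might be relevant here (an assumption on my part; please check the elasticsearch-hadoop configuration docs for your version) is the connector's HTTP timeout and batch size, which can be passed to ESStorage like any other property. The relation, index, and host names below are hypothetical placeholders:

```pig
-- Sketch: raise es.http.timeout and shrink the bulk batch so each
-- request completes within the timeout; 'Emps' and 'eshost' are
-- placeholders for your own relation and node.
STORE Emps INTO 'emps/emp' USING
org.elasticsearch.hadoop.pig.ESStorage('es.host=eshost;es.http.timeout=5m;es.batch.size.entries=500');
```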

Regards,
Dipankar