INSERT OVERWRITE TABLE hangs


(Joe Yang) #1

Dear All:

We just inserted into an external table from an existing one, and it hangs after printing the kill command:
hive> INSERT OVERWRITE TABLE user01 SELECT id,name FROM user_source;
Query ID = csi_20160323182829_c33296cf-9bd6-4809-8d0e-acb521dd1fcc
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1458728756355_0001, Tracking URL = http://server1:8088/proxy/application_1458728756355_0001/
Kill Command = /home/csi/hadoop/bin/hadoop job -kill job_1458728756355_0001

any advice would be appreciated.

Thanks


(Joe Yang) #2

any comment would be appreciated...


(Joe Yang) #3

some errors from hive logs as follows:
2016-03-24 10:31:17,614 INFO [main]: exec.Utilities (Utilities.java:getBaseWork(456)) - File not found: File does not exist: /tmp/hive/csi/9aeb0c93-0e42-4907-80bd-6c46c773e2cc/hive_2016-03-24_10-31-14_052_2790586764771909959-1/-mr-10002/476d8ea7-00d0-4248-8995-dcbbd9f773e4/reduce.xml
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1828)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1712)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:587)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)


(Magnus Bäck) #4

Did you really post this in the right place? Seems like a Hadoop question that's completely unrelated to Elasticsearch.


(Joe Yang) #5

We are trying to connect to es-hadoop from Hive. Any further comments would be appreciated.

Thanks


(Joe Yang) #6

We created that external table successfully, but there's an error message during the query:

hive> select * from user01;
OK
Failed with exception java.io.IOException:org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[127.0.0.1:9200]]
Time taken: 0.128 seconds
hive>
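The connector tried [[127.0.0.1:9200]], which is the es-hadoop default when `es.nodes` is not set, so the table definition is probably missing the Elasticsearch host. A minimal sketch of the DDL (the host name `es-host`, index name `users/user`, and column list are assumptions, not taken from our actual setup):

```sql
-- Hypothetical external table backed by Elasticsearch via es-hadoop.
-- 'es.nodes' must point at a reachable ES node; it defaults to 127.0.0.1.
CREATE EXTERNAL TABLE user01 (
  id   BIGINT,
  name STRING
)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES (
  'es.resource' = 'users/user',   -- target index/type
  'es.nodes'    = 'es-host:9200'  -- replace with your ES host(s)
);
```

After recreating the table with the correct `es.nodes`, `SELECT * FROM user01;` should reach the cluster instead of falling back to localhost.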


(Joe Yang) #7

We have created another issue on the es-hadoop forum; please close this one. Sorry for any inconvenience.


(Yannick Welsch) #8