[Hadoop] ERROR security.UserGroupInformation: PriviledgedActionException as:hue (auth:SIMPLE) cause:BeeswaxException

Hi,

Still struggling to get Elasticsearch + Hadoop working together smoothly.

Is anyone else facing this type of issue?

ERROR security.UserGroupInformation: PriviledgedActionException as:hue (auth:SIMPLE) cause:BeeswaxException

14/02/26 05:42:34 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive

14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=Driver.run>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=TimeToSubmit>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=compile>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=parse>
14/02/26 05:42:34 INFO parse.ParseDriver: Parsing command: use default
14/02/26 05:42:34 INFO parse.ParseDriver: Parse Completed
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=parse start=1393422154484 end=1393422154485 duration=1>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=semanticAnalyze>
14/02/26 05:42:34 INFO ql.Driver: Semantic Analysis Completed
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=semanticAnalyze start=1393422154485 end=1393422154485 duration=0>
14/02/26 05:42:34 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=compile start=1393422154483 end=1393422154485 duration=2>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=Driver.execute>
14/02/26 05:42:34 INFO ql.Driver: Starting command: use default
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=TimeToSubmit start=1393422154483 end=1393422154486 duration=3>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=runTasks>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=task.DDL.Stage-0>
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=task.DDL.Stage-0 start=1393422154486 end=1393422154493 duration=7>
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=runTasks start=1393422154486 end=1393422154493 duration=7>
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=Driver.execute start=1393422154485 end=1393422154493 duration=8>
OK
14/02/26 05:42:34 INFO ql.Driver: OK
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=releaseLocks>
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=releaseLocks start=1393422154493 end=1393422154493 duration=0>
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=Driver.run start=1393422154483 end=1393422154493 duration=10>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=compile>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=parse>
14/02/26 05:42:34 INFO parse.ParseDriver: Parsing command: INSERT OVERWRITE TABLE eslogs SELECT s.time, s.ext, s.ip, s.req, s.res, s.agent FROM logs s
14/02/26 05:42:34 INFO parse.ParseDriver: Parse Completed
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=parse start=1393422154494 end=1393422154495 duration=1>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=semanticAnalyze>
14/02/26 05:42:34 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
14/02/26 05:42:34 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic Analysis
14/02/26 05:42:34 INFO parse.SemanticAnalyzer: Get metadata for source tables
14/02/26 05:42:34 INFO parse.SemanticAnalyzer: Get metadata for subqueries
14/02/26 05:42:34 INFO parse.SemanticAnalyzer: Get metadata for destination tables
14/02/26 05:42:34 INFO parse.SemanticAnalyzer: Completed getting MetaData in Semantic Analysis
14/02/26 05:42:34 INFO ppd.OpProcFactory: Processing for FS(9)
14/02/26 05:42:34 INFO ppd.OpProcFactory: Processing for SEL(8)
14/02/26 05:42:34 INFO ppd.OpProcFactory: Processing for TS(7)
14/02/26 05:42:34 INFO physical.MetadataOnlyOptimizer: Looking for table scans where optimization is applicable
14/02/26 05:42:34 INFO physical.MetadataOnlyOptimizer: Found 0 metadata only table scans
14/02/26 05:42:34 INFO parse.SemanticAnalyzer: Completed plan generation
14/02/26 05:42:34 INFO ql.Driver: Semantic Analysis Completed
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=semanticAnalyze start=1393422154495 end=1393422154549 duration=54>
14/02/26 05:42:34 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:time, type:string, comment:null), FieldSchema(name:ext, type:string, comment:null), FieldSchema(name:ip, type:string, comment:null), FieldSchema(name:req, type:string, comment:null), FieldSchema(name:res, type:int, comment:null), FieldSchema(name:agent, type:string, comment:null)], properties:null)
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=compile start=1393422154493 end=1393422154549 duration=56>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=Driver.execute>
14/02/26 05:42:34 INFO ql.Driver: Starting command: INSERT OVERWRITE TABLE eslogs SELECT s.time, s.ext, s.ip, s.req, s.res, s.agent FROM logs s
Total MapReduce jobs = 1
14/02/26 05:42:34 INFO ql.Driver: Total MapReduce jobs = 1
14/02/26 05:42:34 INFO ql.Driver: </PERFLOG method=TimeToSubmit start=1393422154483 end=1393422154585 duration=102>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=runTasks>
14/02/26 05:42:34 INFO ql.Driver: <PERFLOG method=task.MAPRED.Stage-0>
Launching Job 1 out of 1
14/02/26 05:42:34 INFO ql.Driver: Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
14/02/26 05:42:34 INFO exec.Task: Number of reduce tasks is set to 0 since there's no reduce operator
14/02/26 05:42:34 INFO ql.Context: New scratch dir is hdfs://sandbox.hortonworks.com:8020/tmp/hive-beeswax-hue/hive_2014-02-26_05-42-34_494_4288001753889524446-3
14/02/26 05:42:34 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/02/26 05:42:34 INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
14/02/26 05:42:34 INFO exec.Utilities: Processing alias s
14/02/26 05:42:34 INFO exec.Utilities: Adding input file hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/logs
14/02/26 05:42:34 INFO exec.Utilities: Content Summary not cached for hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/logs
14/02/26 05:42:34 INFO ql.Context: New scratch dir is hdfs://sandbox.hortonworks.com:8020/tmp/hive-beeswax-hue/hive_2014-02-26_05-42-34_494_4288001753889524446-3
14/02/26 05:42:34 INFO exec.Utilities: <PERFLOG method=serializePlan>
14/02/26 05:42:34 INFO exec.Utilities: Serializing MapWork via kryo
14/02/26 05:42:34 INFO exec.Utilities: </PERFLOG method=serializePlan start=1393422154741 end=1393422154807 duration=66>
14/02/26 05:42:34 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
14/02/26 05:42:34 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
14/02/26 05:42:34 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
14/02/26 05:42:35 INFO io.CombineHiveInputFormat: <PERFLOG method=getSplits>
14/02/26 05:42:35 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/logs; using filter path hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/logs
14/02/26 05:42:35 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/02/26 05:42:35 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/02/26 05:42:35 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/02/26 05:42:35 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/02/26 05:42:35 INFO input.FileInputFormat: Total input paths to process : 1
14/02/26 05:42:35 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
14/02/26 05:42:35 INFO io.CombineHiveInputFormat: number of splits 1
14/02/26 05:42:35 INFO io.CombineHiveInputFormat: </PERFLOG method=getSplits start=1393422155194 end=1393422155204 duration=10>
14/02/26 05:42:35 INFO mapreduce.JobSubmitter: number of splits:1
14/02/26 05:42:35 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/02/26 05:42:35 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1393416170595_0002
14/02/26 05:42:35 INFO impl.YarnClientImpl: Submitted application application_1393416170595_0002 to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
14/02/26 05:42:35 INFO mapreduce.Job: The url to track the job: http://sandbox.hortonworks.com:8088/proxy/application_1393416170595_0002/
Starting Job = job_1393416170595_0002, Tracking URL = http://sandbox.hortonworks.com:8088/proxy/application_1393416170595_0002/
14/02/26 05:42:35 INFO exec.Task: Starting Job = job_1393416170595_0002, Tracking URL = http://sandbox.hortonworks.com:8088/proxy/application_1393416170595_0002/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1393416170595_0002
14/02/26 05:42:35 INFO exec.Task: Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1393416170595_0002
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
14/02/26 05:42:42 INFO exec.Task: Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
14/02/26 05:42:42 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2014-02-26 05:42:42,747 Stage-0 map = 0%, reduce = 0%
14/02/26 05:42:42 INFO exec.Task: 2014-02-26 05:42:42,747 Stage-0 map = 0%, reduce = 0%
14/02/26 05:43:06 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2014-02-26 05:43:06,683 Stage-0 map = 100%, reduce = 0%
14/02/26 05:43:06 INFO exec.Task: 2014-02-26 05:43:06,683 Stage-0 map = 100%, reduce = 0%
14/02/26 05:43:09 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
Ended Job = job_1393416170595_0002 with errors
14/02/26 05:43:09 ERROR exec.Task: Ended Job = job_1393416170595_0002 with errors
14/02/26 05:43:09 INFO impl.YarnClientImpl: Killing application application_1393416170595_0002
14/02/26 05:43:09 INFO ql.Driver: </PERFLOG method=task.MAPRED.Stage-0 start=1393422154585 end=1393422189912 duration=35327>
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/02/26 05:43:09 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/02/26 05:43:09 INFO ql.Driver: </PERFLOG method=Driver.execute start=1393422154583 end=1393422189912 duration=35329>
MapReduce Jobs Launched:
14/02/26 05:43:09 INFO ql.Driver: MapReduce Jobs Launched:
14/02/26 05:43:09 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
14/02/26 05:43:09 INFO ql.Driver: Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
14/02/26 05:43:09 INFO ql.Driver: Total MapReduce CPU Time Spent: 0 msec
14/02/26 05:43:09 ERROR beeswax.BeeswaxServiceImpl: Exception while processing query
BeeswaxException(message:Driver returned: 2. Errors: OK
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1393416170595_0002, Tracking URL = http://sandbox.hortonworks.com:8088/proxy/application_1393416170595_0002/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1393416170595_0002
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2014-02-26 05:42:42,747 Stage-0 map = 0%, reduce = 0%
2014-02-26 05:43:06,683 Stage-0 map = 100%, reduce = 0%
Ended Job = job_1393416170595_0002 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
, log_context:aaf7e696-5cbe-4d4d-82ee-88dc6c7154d5, handle:QueryHandle(id:aaf7e696-5cbe-4d4d-82ee-88dc6c7154d5, log_context:aaf7e696-5cbe-4d4d-82ee-88dc6c7154d5), SQLState: )
at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState.execute(BeeswaxServiceImpl.java:351)
at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1$1.run(BeeswaxServiceImpl.java:609)
at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1$1.run(BeeswaxServiceImpl.java:598)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:337)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1471)
at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1.run(BeeswaxServiceImpl.java:598)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
14/02/26 05:43:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hue (auth:SIMPLE) cause:BeeswaxException(message:Driver returned: 2. Errors: OK
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1393416170595_0002, Tracking URL = http://sandbox.hortonworks.com:8088/proxy/application_1393416170595_0002/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1393416170595_0002
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2014-02-26 05:42:42,747 Stage-0 map = 0%, reduce = 0%
2014-02-26 05:43:06,683 Stage-0 map = 100%, reduce = 0%
Ended Job = job_1393416170595_0002 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
, log_context:aaf7e696-5cbe-4d4d-82ee-88dc6c7154d5, handle:QueryHandle(id:aaf7e696-5cbe-4d4d-82ee-88dc6c7154d5, log_context:aaf7e696-5cbe-4d4d-82ee-88dc6c7154d5), SQLState: )
14/02/26 05:43:10 ERROR beeswax.BeeswaxServiceImpl: Caught unexpected exception.
java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1504)
at com.cloudera.beeswax.BeeswaxServiceImpl.doWithState(BeeswaxServiceImpl.java:772)
at com.cloudera.beeswax.BeeswaxServiceImpl.fetch(BeeswaxServiceImpl.java:980)
at com.cloudera.beeswax.api.BeeswaxService$Processor$fetch.getResult(BeeswaxService.java:987)
at com.cloudera.beeswax.api.BeeswaxService$Processor$fetch.getResult(BeeswaxService.java:971)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: BeeswaxException(message:Driver returned: 2. Errors: OK
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1393416170595_0002, Tracking URL = http://sandbox.hortonworks.com:8088/proxy/application_1393416170595_0002/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1393416170595_0002
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2014-02-26 05:42:42,747 Stage-0 map = 0%, reduce = 0%
2014-02-26 05:43:06,683 Stage-0 map = 100%, reduce = 0%
Ended Job = job_1393416170595_0002 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
, log_context:aaf7e696-5cbe-4d4d-82ee-88dc6c7154d5, handle:QueryHandle(id:aaf7e696-5cbe-4d4d-82ee-88dc6c7154d5, log_context:aaf7e696-5cbe-4d4d-82ee-88dc6c7154d5), SQLState: )
at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState.execute(BeeswaxServiceImpl.java:351)
at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1$1.run(BeeswaxServiceImpl.java:609)
at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1$1.run(BeeswaxServiceImpl.java:598)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:337)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1471)
at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState$1.run(BeeswaxServiceImpl.java:598)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more

Thanks.

Cheers,
Yann

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/9aa69a10-09f4-408c-87be-db1749485d6a%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Looking at the stack trace, the issue seems to be caused by Beeswax/Hive itself, independent of es-hadoop (which doesn't appear anywhere in the stack trace).
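When MapRedTask fails with return code 2, the Hive/Beeswax console only relays the exit status; the actual exception lives in the failed map task's container log. A sketch of how to pull it, assuming YARN log aggregation is enabled on the sandbox:

```shell
# The application id comes from the "Submitted application" line in the log.
# Dump the aggregated container logs and look for the root cause:
yarn logs -applicationId application_1393416170595_0002 | grep -B 5 -A 30 "Caused by"

# Alternatively, open the tracking URL from the log and drill into the
# failed map attempt's stderr/syslog:
#   http://sandbox.hortonworks.com:8088/proxy/application_1393416170595_0002/
```

If the target table (eslogs) is backed by es-hadoop's EsStorageHandler, the container log should also reveal whether the es-hadoop jar was actually on the task classpath and whether Elasticsearch was reachable from the task node.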

On 2/26/2014 3:53 PM, Yann Barraud wrote:

Hi,

Still struggling to get Es + Hadoop smoothly working together.

Anyone facing this type of issue ?

ERROR security.UserGroupInformation: PriviledgedActionException as:hue (auth:SIMPLE) cause:BeeswaxException

[full Hive/Beeswax log snipped; identical to the log above]

Thanks.

Cheers,
Yann
--
Costin

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/530E0F88.7030105%40gmail.com.
For more options, visit https://groups.google.com/groups/opt_out.