Hadoop jar elasticsearch-yarn-2.1.1.jar -start throws ArithmeticException: / by zero

I'm trying to install Elasticsearch 1.6.0 on a Cloudera Hadoop cluster. The download and install steps seem to have gone correctly:
hadoop jar elasticsearch-yarn-2.1.1.jar -download-es
hadoop jar elasticsearch-yarn-2.1.1.jar -install

But, when I go to start the system:
hadoop jar elasticsearch-yarn-2.1.1.jar -start containers=12
15/10/28 15:12:19 INFO impl.TimelineClientImpl: Timeline service address: http://orhddb01dxdu.dev.oclc.org:9062/ws/v1/timeline/
15/10/28 15:12:19 INFO client.AHSProxy: Connecting to Application History server at orhddb01dxdu.dev.oclc.org/10.192.215.16:9061
15/10/28 15:12:19 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
Abnormal execution:/ by zero
java.lang.ArithmeticException: / by zero
at org.elasticsearch.hadoop.yarn.util.YarnUtils.yarnAcceptableMin(YarnUtils.java:113)
at org.elasticsearch.hadoop.yarn.util.YarnUtils.minMemory(YarnUtils.java:103)
at org.elasticsearch.hadoop.yarn.compat.YarnCompat.resource(YarnCompat.java:71)

This also happens when I omit the containers=12 parameter.

hadoop version
Hadoop 2.6.0-cdh5.4.3
Subversion http://github.com/cloudera/hadoop -r 4cd9f51a3f1ef748d45b8d77d0f211ad44296d4b
Compiled by jenkins on 2015-06-25T02:32Z
Compiled with protoc 2.5.0
From source with checksum 4acea6ac185376e0b48b33695e88e7a7
This command was run using /drive1/hadoop/5.4.0.2/hadoop-2.6.0-cdh5.4.3/share/hadoop/common/hadoop-common-2.6.0-cdh5.4.3.jar
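The exception points at a memory-rounding helper. A minimal sketch of the suspected arithmetic (this is an assumption about what yarnAcceptableMin does, not the actual source): rounding the requested container memory up to a multiple of the scheduler's minimum allocation divides by that minimum, so a ResourceManager that reports a minimum allocation of 0 triggers the / by zero.

```shell
# Hypothetical sketch of the rounding believed to be behind the exception.
# round_to_min: round $1 MB up to a multiple of $2 MB (the scheduler minimum).
round_to_min() {
  local req=$1 min=$2
  local rem=$(( req % min ))            # divides by zero when min == 0
  if [ "$rem" -eq 0 ]; then
    echo "$req"
  else
    echo $(( req + min - rem ))
  fi
}

round_to_min 1500 1024   # prints 2048
# round_to_min 1500 0    # would abort with "division by 0", mirroring the Java exception
```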

Looks like a bug; I've created an issue here.

Hello, it looks like I'm seeing the same issue. I'm running a healthy, operational 30-node CDH 5.5.1 cluster with the following CDH-managed services: cdap, hbase, hue, impala, kafka, oozie, spark, yarn, hdfs, hive, and zookeeper. Elasticsearch is also installed on all of our nodes, but it is not managed by CDH.

The install of elasticsearch-yarn looks good, but I too am getting a divide-by-zero error. Do you have a suggestion for a workaround?

Here's the output I'm seeing.

[root@scc dist]# hadoop jar elasticsearch-yarn-2.1.0.jar -download-es
Destination file ./downloads/elasticsearch-1.6.0.zip already exists; aborting download...
[root@scc dist]# hadoop jar elasticsearch-yarn-2.1.0.jar -install-es
Uploaded /opt/elasticsearch/elasticsearch-hadoop-2.1.0/dist/./downloads/elasticsearch-1.6.0.zip to HDFS at hdfs://scc.silverdale.dev:8020/apps/elasticsearch/elasticsearch-1.6.0.zip
[root@scc dist]# hadoop jar elasticsearch-yarn-2.1.0.jar -install
Uploaded /opt/elasticsearch/elasticsearch-hadoop-2.1.0/dist/elasticsearch-yarn-2.1.0.jar to HDFS at hdfs://scc.silverdale.dev:8020/apps/elasticsearch/elasticsearch-yarn-2.1.0.jar
[root@scc dist]#
[root@scc dist]# hadoop fs -ls /apps/elasticsearch
Found 2 items
-rw-rw-r-- 3 root supergroup 31685943 2016-01-15 11:08 /apps/elasticsearch/elasticsearch-1.6.0.zip
-rw-rw-r-- 3 root supergroup 53820 2016-01-15 11:08 /apps/elasticsearch/elasticsearch-yarn-2.1.0.jar
[root@scc dist]# hadoop jar elasticsearch-yarn-2.1.0.jar -start
16/01/15 11:09:27 INFO client.RMProxy: Connecting to ResourceManager at scc.silverdale.dev/172.21.10.140:8032
Abnormal execution:/ by zero
java.lang.ArithmeticException: / by zero
at org.elasticsearch.hadoop.yarn.util.YarnUtils.yarnAcceptableMin(YarnUtils.java:113)
at org.elasticsearch.hadoop.yarn.util.YarnUtils.minMemory(YarnUtils.java:103)
at org.elasticsearch.hadoop.yarn.compat.YarnCompat.resource(YarnCompat.java:71)
at org.elasticsearch.hadoop.yarn.client.YarnLauncher.setupAM(YarnLauncher.java:80)
at org.elasticsearch.hadoop.yarn.client.YarnLauncher.run(YarnLauncher.java:69)
at org.elasticsearch.hadoop.yarn.cli.YarnBootstrap.start(YarnBootstrap.java:155)
at org.elasticsearch.hadoop.yarn.cli.YarnBootstrap.run(YarnBootstrap.java:100)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.elasticsearch.hadoop.yarn.cli.YarnBootstrap.main(YarnBootstrap.java:58)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
[root@scc dist]# hadoop jar elasticsearch-yarn-2.1.0.jar -start containers=2
16/01/15 11:09:55 INFO client.RMProxy: Connecting to ResourceManager at scc.silverdale.dev/172.21.10.140:8032
Abnormal execution:/ by zero
java.lang.ArithmeticException: / by zero
at org.elasticsearch.hadoop.yarn.util.YarnUtils.yarnAcceptableMin(YarnUtils.java:113)
at org.elasticsearch.hadoop.yarn.util.YarnUtils.minMemory(YarnUtils.java:103)
at org.elasticsearch.hadoop.yarn.compat.YarnCompat.resource(YarnCompat.java:71)
at org.elasticsearch.hadoop.yarn.client.YarnLauncher.setupAM(YarnLauncher.java:80)
at org.elasticsearch.hadoop.yarn.client.YarnLauncher.run(YarnLauncher.java:69)
at org.elasticsearch.hadoop.yarn.cli.YarnBootstrap.start(YarnBootstrap.java:155)
at org.elasticsearch.hadoop.yarn.cli.YarnBootstrap.run(YarnBootstrap.java:100)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.elasticsearch.hadoop.yarn.cli.YarnBootstrap.main(YarnBootstrap.java:58)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

If you follow the issue link above, you'll find the versions in which it was fixed; 2.2-rc1 was released last week, so you can try it out.
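For reference, the upgrade amounts to repeating the earlier bootstrap steps with the new artifact. A hedged sketch — the download URL and paths here are assumptions; adjust them for your environment:

```shell
# Assumed location of the 2.2.0-rc1 distribution; verify against the elastic.co downloads page.
wget https://download.elastic.co/hadoop/elasticsearch-hadoop-2.2.0-rc1.zip
unzip elasticsearch-hadoop-2.2.0-rc1.zip
cd elasticsearch-hadoop-2.2.0-rc1/dist

# Same bootstrap sequence as with 2.1.x, pointed at the new jar:
hadoop jar elasticsearch-yarn-2.2.0-rc1.jar -download-es
hadoop jar elasticsearch-yarn-2.2.0-rc1.jar -install-es
hadoop jar elasticsearch-yarn-2.2.0-rc1.jar -install
hadoop jar elasticsearch-yarn-2.2.0-rc1.jar -start
```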

Cheers,

Thanks, I think that solved my issue, but my Elasticsearch-YARN cluster is shutting down after a couple of minutes. Is this still related to the bug?

16/01/15 15:13:39 INFO client.RMProxy: Connecting to ResourceManager at scc/172.21.10.140:8032
Id State Status Start Time Finish Time Tracking URL
application_1452556549828_0025 RUNNING UNDEFINED 1/15/16 3:13 PM N/A http://scc:8088/proxy/application_1452556549828_0025/A
application_1452556549828_0024 FINISHED FAILED 1/15/16 3:09 PM 1/15/16 3:09 PM http://scc:8088/proxy/application_1452556549828_0024/A
application_1452556549828_0023 FINISHED FAILED 1/15/16 3:07 PM 1/15/16 3:08 PM http://scc:8088/proxy/application_1452556549828_0023/A
application_1452556549828_0022 FINISHED FAILED 1/15/16 3:05 PM 1/15/16 3:06 PM http://scc:8088/proxy/application_1452556549828_0022/A

[root@staging1 dist]# hadoop jar /home/msadang/elasticsearch-hadoop-2.2.0-rc1/dist/elasticsearch-yarn-2.2.0-rc1.jar -status
16/01/15 15:13:54 INFO client.RMProxy: Connecting to ResourceManager at scc/172.21.10.140:8032
Id State Status Start Time Finish Time Tracking URL
application_1452556549828_0025 RUNNING FAILED 1/15/16 3:13 PM 1/15/16 3:15 PM http://scc:8088/proxy/application_1452556549828_0025/A
application_1452556549828_0024 FINISHED FAILED 1/15/16 3:09 PM 1/15/16 3:09 PM http://scc:8088/proxy/application_1452556549828_0024/A
application_1452556549828_0023 FINISHED FAILED 1/15/16 3:07 PM 1/15/16 3:08 PM http://scc:8088/proxy/application_1452556549828_0023/A
application_1452556549828_0022 FINISHED FAILED 1/15/16 3:05 PM 1/15/16 3:06 PM http://scc:8088/proxy/application_1452556549828_0022/A

Did this happen before or is it a new development?

Completely new development. I only installed elasticsearch-yarn-2.2.0-rc1 on my cluster after reading your suggestion to follow the link. This is the initial start-up of the installation.

So if you start ES-YARN 2.1 with the number of containers specified, it works okay, as opposed to ES-YARN 2.2?

I did another install today on a Hadoop 2.7.1 distribution using the latest ES-YARN 2.2 and everything seems to be in order. Can you double-check your logs to see what the error is?

Maybe it's user error on my part, because I don't see any issues in the logs. Is the ES-YARN job supposed to stop right after it starts? I ran elasticsearch-yarn-2.2.0-rc1.jar -start and got log output with no errors. I say user error because maybe this is the expected behavior?

I also thought the install files left over from the 2.1 installation in HDFS could be causing the issues. I removed those and started from scratch, and I'm seeing the same issues.
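For anyone repeating that cleanup, a sketch using the HDFS paths shown in the -ls output earlier (this assumes /apps/elasticsearch is the only location the 2.1 install wrote to):

```shell
# Remove the 2.1-era artifacts from HDFS (path taken from the -ls output above).
hadoop fs -rm -r /apps/elasticsearch

# Clear the local download cache too, so -download-es fetches a fresh copy.
rm -rf ./downloads

# Then repeat the install with the 2.2.0-rc1 jar.
hadoop jar elasticsearch-yarn-2.2.0-rc1.jar -download-es
hadoop jar elasticsearch-yarn-2.2.0-rc1.jar -install-es
hadoop jar elasticsearch-yarn-2.2.0-rc1.jar -install
```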

I've put my log output below for reference.

20160118 09:11:53,863 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 33
20160118 09:11:54,612 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 33 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
20160118 09:11:54,613 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 33 submitted by user root
20160118 09:11:54,613 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1452556549828_0033
20160118 09:11:54,613 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root IP=172.21.10.110 OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1452556549828_0033
20160118 09:11:54,613 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1452556549828_0033 State change from NEW to NEW_SAVING
20160118 09:11:54,613 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1452556549828_0033
20160118 09:11:54,616 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1452556549828_0033 State change from NEW_SAVING to SUBMITTED
20160118 09:11:54,617 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Accepted application application_1452556549828_0033 from user: root, in queue: default, currently num of applications: 2
20160118 09:11:54,617 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1452556549828_0033 State change from SUBMITTED to ACCEPTED
20160118 09:11:54,617 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1452556549828_0033_000001
20160118 09:11:54,617 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1452556549828_0033_000001 State change from NEW to SUBMITTED
20160118 09:11:54,618 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Added Application Attempt appattempt_1452556549828_0033_000001 to scheduler from user: root
20160118 09:11:54,618 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1452556549828_0033_000001 State change from SUBMITTED to SCHEDULED
20160118 09:11:55,070 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1452556549828_0033_01_000001 Container Transitioned from NEW to ALLOCATED
20160118 09:11:55,070 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1452556549828_0033 CONTAINERID=container_1452556549828_0033_01_000001
20160118 09:11:55,070 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1452556549828_0033_01_000001 of capacity <memory:1024, vCores:1> on host sc25.silverdale.dev:8041, which has 1 containers, <memory:1024, vCores:1> used and <memory:343040, vCores:35> available after allocation
20160118 09:11:55,070 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : sc25.silverdale.dev:8041 for container : container_1452556549828_0033_01_000001
20160118 09:11:55,071 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1452556549828_0033_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
20160118 09:11:55,071 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1452556549828_0033_000001
20160118 09:11:55,071 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1452556549828_0033 AttemptId: appattempt_1452556549828_0033_000001 MasterContainer: Container: [ContainerId: container_1452556549828_0033_01_000001, NodeId: sc25.silverdale.dev:8041, NodeHttpAddress: sc25.silverdale.dev:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.21.10.165:8041 }, ]
20160118 09:11:55,071 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1452556549828_0033_000001 State change from SCHEDULED to ALLOCATED_SAVING
20160118 09:11:55,075 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1452556549828_0033_000001 State change from ALLOCATED_SAVING to ALLOCATED

20160118 09:11:55,075 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1452556549828_0033_000001
20160118 09:11:55,076 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1452556549828_0033_01_000001, NodeId: sc25.silverdale.dev:8041, NodeHttpAddress: sc25.silverdale.dev:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.21.10.165:8041 }, ] for AM appattempt_1452556549828_0033_000001
20160118 09:11:55,076 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1452556549828_0033_01_000001 : {{JAVA_HOME}}/bin/java org.elasticsearch.hadoop.yarn.am.ApplicationMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
20160118 09:11:55,076 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1452556549828_0033_000001
20160118 09:11:55,076 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1452556549828_0033_000001
20160118 09:11:55,090 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1452556549828_0033_01_000001, NodeId: sc25.silverdale.dev:8041, NodeHttpAddress: sc25.silverdale.dev:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.21.10.165:8041 }, ] for AM appattempt_1452556549828_0033_000001
20160118 09:11:55,090 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1452556549828_0033_000001 State change from ALLOCATED to LAUNCHED
20160118 09:11:55,299 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1452556549828_0033_01_000001 Container Transitioned from ACQUIRED to RUNNING
20160118 09:11:57,299 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1452556549828_0033_000001
20160118 09:11:57,300 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root IP=172.21.10.165 OPERATION=Register App Master TARGET=ApplicationMasterService RESULT=SUCCESS APPID=application_1452556549828_0033 APPATTEMPTID=appattempt_1452556549828_0033_000001
20160118 09:11:57,300 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1452556549828_0033_000001 State change from LAUNCHED to RUNNING
20160118 09:11:57,300 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1452556549828_0033 State change from ACCEPTED to RUNNING
20160118 09:11:57,345 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1452556549828_0033_01_000002 Container Transitioned from NEW to ALLOCATED
20160118 09:11:57,345 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1452556549828_0033 CONTAINERID=container_1452556549828_0033_01_000002
20160118 09:11:57,346 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1452556549828_0033_01_000002 of capacity <memory:2048, vCores:1> on host sc22.silverdale.dev:8041, which has 1 containers, <memory:2048, vCores:1> used and <memory:342016, vCores:35> available after allocation
20160118 09:12:02,362 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : sc22.silverdale.dev:8041 for container : container_1452556549828_0033_01_000002
20160118 09:12:02,363 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1452556549828_0033_01_000002 Container Transitioned from ALLOCATED to ACQUIRED
20160118 09:12:04,314 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1452556549828_0033_01_000002 Container Transitioned from ACQUIRED to RUNNING
20160118 09:12:05,315 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1452556549828_0033_01_000002 Container Transitioned from RUNNING to COMPLETED
20160118 09:12:05,316 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Completed container: container_1452556549828_0033_01_000002 in state: COMPLETED event:FINISHED
20160118 09:12:05,316 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1452556549828_0033 CONTAINERID=container_1452556549828_0033_01_000002

Next part of the log:


20160118 09:12:23,588 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Null container completed...
20160118 09:12:23,652 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1452556549828_0033_000001 with final state: FINISHING, and exit status: -1000
20160118 09:12:23,652 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1452556549828_0033_000001 State change from RUNNING to FINAL_SAVING
20160118 09:12:23,653 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1452556549828_0033 with final state: FINISHING
20160118 09:12:23,656 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1452556549828_0033 State change from RUNNING to FINAL_SAVING
20160118 09:12:23,656 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1452556549828_0033
20160118 09:12:23,657 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1452556549828_0033_000001 State change from FINAL_SAVING to FINISHING
20160118 09:12:23,664 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1452556549828_0033 State change from FINAL_SAVING to FINISHING
20160118 09:12:23,758 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1452556549828_0033 unregistered successfully.
20160118 09:12:23,759 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1452556549828_0033 unregistered successfully.
20160118 09:12:24,342 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1452556549828_0033_01_000001 Container Transitioned from RUNNING to COMPLETED
20160118 09:12:24,342 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1452556549828_0033_000001
20160118 09:12:24,342 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Completed container: container_1452556549828_0033_01_000001 in state: COMPLETED event:FINISHED
20160118 09:12:24,342 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1452556549828_0033_000001
20160118 09:12:24,342 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1452556549828_0033 CONTAINERID=container_1452556549828_0033_01_000001
20160118 09:12:24,342 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1452556549828_0033_000001 State change from FINISHING to FINISHED
20160118 09:12:24,342 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1452556549828_0033_01_000001 of capacity <memory:1024, vCores:1> on host sc25.silverdale.dev:8041, which currently has 0 containers, <memory:0, vCores:0> used and <memory:344064, vCores:36> available, release resources=true
20160118 09:12:24,343 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1452556549828_0033 State change from FINISHING to FINISHED
20160118 09:12:24,343 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application attempt appattempt_1452556549828_0033_000001 released container container_1452556549828_0033_01_000001 on node: host: sc25.silverdale.dev:8041 #containers=0 available=344064 used=0 with event: FINISHED
20160118 09:12:24,343 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1452556549828_0033
20160118 09:12:24,343 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1452556549828_0033_000001
20160118 09:12:24,343 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application appattempt_1452556549828_0033_000001 is done. finalState=FINISHED
20160118 09:12:24,343 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1452556549828_0033,name=Elasticsearch-YARN,user=root,queue=root.root,state=FINISHED,trackingUrl=http://scc.silverdale.dev:8088/proxy/application_1452556549828_0033/A,appMasterHost=,startTime=1453137114612,finishTime=1453137143652,finalStatus=FAILED
20160118 09:12:24,343 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1452556549828_0033 requests cleared

@westcoastadmin Sorry, maybe you can read those posts, but I can't. Would you mind using something like Gist or Pastebin to post the logs?

Also, try looking at the failed container to see whether anything shows up in its archived/historic logs.
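One way to pull those archived container logs from the command line (this assumes log aggregation is enabled so yarn logs can read them from HDFS; the application id is taken from the -status output above):

```shell
# Dump the aggregated logs for one of the failed applications;
# the Elasticsearch container's stdout/stderr should be in there.
yarn logs -applicationId application_1452556549828_0025

# The ResourceManager web UI links from the -status output work too, e.g.
# http://scc:8088/proxy/application_1452556549828_0025/
```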