Hi,

Below are the details of my setup:

Cluster running Hadoop 2.6, Spark 1.3.1, and Scala 2.10.4. My code uses the library "elasticsearch-spark_2.10-2.1.0.jar" along with the usual Spark libraries (e.g. spark-core_2.10-1.3.1.jar). When I run the code on the cluster using spark-submit, I get the following dump with an exception:
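For context, the job is launched roughly as follows (a sketch only: the jar path, master URL, and topic argument below are illustrative placeholders, not the exact invocation; the driver class name comes from the stack trace):

```shell
# Illustrative submit command; paths and master URL are placeholders.
spark-submit \
  --class com.philips.bda.spark.SparkDriver \
  --master spark://<master-host>:7077 \
  --jars elasticsearch-spark_2.10-2.1.0.jar \
  <application-assembly>.jar <kafkaTopics>
```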
2015-07-16 10:21:14,653 INFO Log4j appears to be running in a Servlet environment, but there's no log4j-web module available. If you want better web container support, please add the log4j-web JAR to your web archive or server lib directory.
10:21:14.763 [main] INFO com.philips.bda.spark.SparkDriver$ - kafkaTopics value passed is [Ljava.lang.String;@5a7169a1.
10:21:14.765 [main] WARN com.philips.bda.spark.SparkDriver$ - Spark mode value is not passed. Running in spark-standalone mode.
10:21:14.765 [main] INFO com.philips.bda.spark.SparkDriver$ - Starting the Spark Applications!!!
15/07/16 10:21:15 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.UnsupportedOperationException: Not implemented by the TFS FileSystem implementation
at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:216)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2564)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2574)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at com.philips.hdfs.filesystem.FileSystemFactory$.createFileSystem(FileSystemFactory.scala:27)
at com.philips.hdfs.filesystem.FileSystemFactory$.getFileSystem(FileSystemFactory.scala:20)
at com.philips.bda.spark.SparkDriver$.Setup(SparkDriver.scala:201)
at com.philips.bda.spark.SparkDriver$.main(SparkDriver.scala:79)
at com.philips.bda.spark.SparkDriver.main(SparkDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I don't understand why Spark is looking for the Tachyon file system (TFS). My code only asks Hadoop for a FileSystem (FileSystemFactory.scala:27), and the failure happens inside FileSystem.loadFileSystems while it enumerates the registered FileSystem implementations. Any help is very much appreciated.
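For reference, this is a hypothetical reconstruction of what my FileSystemFactory.createFileSystem (FileSystemFactory.scala:27) does; the real code is not shown above, but the stack trace shows FileSystem.get being reached from that line:

```scala
// Hypothetical sketch of the failing call site (the actual factory code
// may differ; this only mirrors the call chain visible in the trace).
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem

object FileSystemFactory {
  def createFileSystem(): FileSystem = {
    val conf = new Configuration()
    // FileSystem.get triggers FileSystem.loadFileSystems, which uses the
    // JDK ServiceLoader to instantiate every FileSystem implementation
    // registered under META-INF/services/org.apache.hadoop.fs.FileSystem
    // on the classpath -- including Tachyon's TFS, even though the code
    // never asks for the tfs:// scheme explicitly.
    FileSystem.get(conf)
  }
}
```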