I'm trying to use es-hadoop with Zeppelin. I added the jar properly, but when I went to define es.nodes in Zeppelin's Spark interpreter settings, the value didn't seem to take effect.
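For context, this is roughly the kind of paragraph I'm running; `sc` is Zeppelin's built-in SparkContext, and the index name is just an example:

```scala
// In a Zeppelin %spark paragraph.
// Relies on es.nodes (and es.port, etc.) coming from the interpreter settings
// via SparkConf, rather than being passed inline with the call.
import org.elasticsearch.spark._

val docs = sc.esRDD("logs")   // "logs" is a made-up index name
docs.take(5).foreach(println)
```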
The reason turned out to be a recent change in Zeppelin: it no longer propagates any configuration value that doesn't start with "spark." into the SparkConf.
The stated reasoning was that, although you can pass such configuration values into SparkConf via code when you initialize the context yourself, Spark allegedly does not propagate non-"spark." configuration values to the executors anyway (https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala#L372).
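For reference, this is the kind of programmatic setup I mean, which is possible when you create the SparkContext yourself but not with Zeppelin's shared context (the host is just a placeholder):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Setting es.* values programmatically at SparkContext creation time.
val conf = new SparkConf()
  .setAppName("es-hadoop-example")
  .setMaster("local[*]")          // local mode just so the sketch runs standalone
  .set("es.nodes", "10.0.0.5")    // placeholder Elasticsearch host
  .set("es.port", "9200")

val sc = new SparkContext(conf)
// es-hadoop calls made through this driver-side context can read the es.*
// keys from the conf; the open question is what happens on the executors.
```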
Bottom line: starting with Zeppelin 0.7.1, there seems to be no way to configure es-hadoop through the interpreter settings.
The question, then, is: how does es-hadoop use the es.nodes value, and does it or does it not transfer that value to all the nodes itself? What I'd like to figure out is whether es-hadoop is misusing SparkConf by expecting a non-"spark." value in it, or whether this is a needless restriction on the Zeppelin side.
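To make the question concrete, here is my guess (which may well be wrong) at the distinction I'm asking about:

```scala
import org.apache.spark.SparkContext

// (a) es-hadoop reads "es.nodes" from the driver-side SparkConf and then ships
//     it to the tasks itself (e.g. in its own job settings), so executors never
//     need the key to be in their SparkConf; or
// (b) es-hadoop expects "es.nodes" to be present in SparkConf on the executors
//     too, which would indeed break, since Spark only forwards "spark."-prefixed
//     entries to executors.
def readEsNodes(sc: SparkContext): String =
  sc.getConf.get("es.nodes", "localhost") // driver-side lookup, i.e. case (a)
```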
Your help is greatly appreciated.