Using Spark DataSource with ES Hadoop

I am using the 5.0.0 alpha to do some testing with Spark 2.0.0 and Elasticsearch. Everything I have done so far has been against local instances of Elasticsearch and Spark. Now I need to read an Elasticsearch index from a cluster that is not on my local box. The old documentation suggests that es.nodes needs to be set in the conf. I tried setting that in spark-shell and it did not work. Is this still the correct way to point the connector at a different cluster? Please help!

Hello,

That is indeed still the way to target Elasticsearch clusters; nothing has changed on that front. Could you post some more information on what your environment looks like and how you're going about the configuration steps?
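For reference, here is a minimal sketch of how the es.nodes setting is typically supplied from Spark 2.0. The host names, index name, and app name below are placeholders, not values from your setup; also note that when passing connector settings on the spark-shell command line they generally need the "spark." prefix (e.g. --conf spark.es.nodes=...), which the connector strips off:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: pointing ES-Hadoop at a remote cluster (placeholder hosts).
val spark = SparkSession.builder()
  .appName("es-remote-read")
  // Cluster-wide connector settings can live in the Spark conf.
  .config("es.nodes", "es-host-1.example.com,es-host-2.example.com")
  .config("es.port", "9200")
  .getOrCreate()

// Settings can also be overridden per read via .option(...).
val df = spark.read
  .format("org.elasticsearch.spark.sql")
  .option("es.nodes", "es-host-1.example.com")
  .load("my-index/my-type")

df.show()
```

If the cluster sits behind a proxy or in a cloud environment where the node addresses returned by Elasticsearch are not reachable from the Spark driver, setting es.nodes.wan.only to true is often needed as well.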