I'm using Spark 1.4.0 and have just started experimenting with elasticsearch-hadoop.
I don't want to rely on Spark adding libraries at runtime, so I build an uber jar whenever I write a new driver.
I added "org.elasticsearch" %% "elasticsearch-spark" % "2.1.0" to build.sbt and ran "sbt assembly", but hit a deduplication error:
java.lang.RuntimeException: deduplicate: different file contents found in the following:
I tried excluding spark-core from elasticsearch-spark, with no luck.
So I'd like to know the best practice for excluding libraries so that I can maintain an uber jar that contains elasticsearch-spark.
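For reference, here is roughly what I have in mind — a minimal build.sbt sketch assuming the sbt-assembly plugin; the Scala-version suffix in the exclusion ("spark-core_2.10") and the merge rules are my assumptions, not a verified fix:

```scala
// build.sbt — minimal sketch, assumes the sbt-assembly plugin is enabled.

libraryDependencies ++= Seq(
  // Mark Spark itself as "provided" so it is not bundled into the uber jar;
  // the cluster supplies it at runtime.
  "org.apache.spark" %% "spark-core" % "1.4.0" % "provided",
  // Pull in elasticsearch-spark while excluding its transitive Spark dependency
  // (artifact name suffix assumed to be the Scala 2.10 build).
  ("org.elasticsearch" %% "elasticsearch-spark" % "2.1.0")
    .exclude("org.apache.spark", "spark-core_2.10")
)

// Resolve remaining duplicate files instead of failing the assembly.
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _                             => MergeStrategy.first
}
```

Is marking Spark as "provided" plus a merge strategy like this the recommended approach, or is there a cleaner way to scope the exclusions?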
Thanks in advance!