There are plenty of spark / akka / scala / elasticsearch-hadoop
dependencies to keep track of.
Is it true that elasticsearch-hadoop needs to be compiled against a specific
Spark version to run correctly on the cluster? I'm also trying to keep
track of the Akka and Scala versions. I.e., will es-hadoop compiled
for Spark 1.2 work with Spark 1.3?
When elasticsearch-hadoop versions are released, such as v2.0, v2.1, or
v2.1.0.Beta3, at what point do we need to keep in mind which Spark version
each was compiled against?
I.e., is it safe to assume that each es-hadoop release is tied to a specific
Spark Core version?
I've been keeping the following chart in my notes to track all the
versions and their dependencies:
Akka version dependencies:
- Current Akka stable release: 2.3.9
- Elasticsearch-Hadoop 2.1.0.Beta3 = Spark 1.1.0
- Elasticsearch-Hadoop 2.1.0.Beta3-SNAPSHOT = Spark 1.2.1
- Elasticsearch-Hadoop: what about Spark 1.3?
- Spark 1.3: Akka 2.3.4-spark
- Spark 1.2: Akka 2.3.4-spark
- Spark 1.1: Akka 2.2.3-shaded-protobuf
- Activator 1.2.12 comes with Akka 2.3.4
- Play 2.3.8: Akka 2.3.4, Scala 2.11.1 (will also work with 2.10.4)
- Play 2.2.x: Akka 2.2.0
- Spark Job Server 0.4.1: Spark Core 1.1.0, Akka 2.2.4
- Spark Job Server master (as of Feb 22, 2015): Spark Core 1.2.0, Akka 2.3.4, Scala 2.10.4
- Akka Persistence: latest, 2.3.4 or later
- Akka 2.3.9 is released for Scala 2.10.4 and 2.11.5
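For what it's worth, here's the kind of sbt fragment I'd use to pin these
explicitly. The versions are just the ones from my chart above, not a
verified-compatible combination, so treat it as a sketch:

```scala
// build.sbt sketch: pinning versions from the chart above.
// These coordinates reflect my notes, not a tested combination --
// adjust them to whatever the cluster actually runs.
scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  // "provided" so the cluster's own Spark is used at runtime;
  // Spark 1.2.x pulls in its shaded Akka (2.3.4-spark per my chart)
  "org.apache.spark" %% "spark-core" % "1.2.1" % "provided",
  // the es-hadoop beta that (per my notes) targets Spark 1.2.1
  "org.elasticsearch" % "elasticsearch-hadoop" % "2.1.0.Beta3",
  // standalone Akka for application code; this is the part that can
  // clash with the Akka version Spark was built against
  "com.typesafe.akka" %% "akka-actor" % "2.3.9"
)
```

At least with everything spelled out like this, `sbt dependencyTree` (or an
evicted-dependency report) will show which Akka actually wins.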
--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/28ad3f78-8b3d-450a-a29d-06d3e6636cfd%40googlegroups.com.