Hi,
I'm getting the following error.
Error: Multiple ES-Hadoop versions detected in the classpath; please use only one
I cannot find any other instance of elasticsearch-hadoop on the server, and it is not in my jar either.
I even ran:
find . -name "*.jar" -exec zipgrep -i "elasticsearch-hadoop" '{}' \;
just to make sure there was nothing in any of the jars on the classpath. We are using a Cloudera distribution.
I am reading from HBase and writing to ES (5.1.2). In a MapReduce job, reading and writing to ES on its own is fine, and reading and writing to HBase on its own is fine. But when the job reads from HBase and writes to ES (in the reducer), it fails with the error above. If I skip the ES write and simply print to stdout, there are no issues either.
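For context, the job follows the standard HBase-in / ES-out shape, roughly like the sketch below (the ES host, table name, index name, and the mapper/reducer bodies are placeholders, not my actual code):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.elasticsearch.hadoop.mr.EsOutputFormat;

public class HBaseToEsDriver {

    // Placeholder mapper: turns each HBase row into a JSON document
    public static class MyMapper extends TableMapper<Text, Text> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context ctx)
                throws IOException, InterruptedException {
            String rowkey = Bytes.toString(key.get());
            ctx.write(new Text(rowkey), new Text("{\"rowkey\":\"" + rowkey + "\"}"));
        }
    }

    // Placeholder reducer: passes the JSON through to EsOutputFormat
    public static class MyReducer extends Reducer<Text, Text, NullWritable, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            for (Text doc : values) {
                ctx.write(NullWritable.get(), doc);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("es.nodes", "es-host:9200");        // placeholder ES host
        conf.set("es.resource", "my-index/my-type"); // placeholder index/type
        conf.set("es.input.json", "yes");            // reducer emits raw JSON

        Job job = Job.getInstance(conf, "hbase-to-es");
        job.setJarByClass(HBaseToEsDriver.class);

        Scan scan = new Scan();
        TableMapReduceUtil.initTableMapperJob(
                "my-table", scan, MyMapper.class, Text.class, Text.class, job);

        job.setReducerClass(MyReducer.class);
        job.setOutputFormatClass(EsOutputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}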
Driving me nuts
Any hints?
A little more info:
Added some debug code to the reducer (see the very bottom). It looks like the class is being resolved from two different jars when both HBase and ES are used in the same MapReduce job.
Target: org/elasticsearch/hadoop/util/Version.class
URL: jar:file:/u09/hadoop/yarn/nm/usercache/root/appcache/application_1483942272862_0507/filecache/10/job.jar/job.jar!/org/elasticsearch/hadoop/util/Version.class
URL: jar:file:/u12/hadoop/yarn/nm/usercache/root/filecache/3346/xxxxxxxxx.jar!/org/elasticsearch/hadoop/util/Version.class
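The first URL is the fat job.jar; the second is a jar shipped to the YARN distributed cache. My guess (not verified) is that TableMapReduceUtil.initTableMapperJob calls addDependencyJars by default, and since EsOutputFormat is the job's output format, the jar containing it gets shipped to the cache alongside the fat jar that already bundles it. If that's what's happening, the overload that skips dependency-jar shipping might avoid the duplicate:

// Untested idea: skip automatic dependency-jar shipping so es-hadoop only
// travels inside the fat job.jar. The boolean overload exists for this.
TableMapReduceUtil.initTableMapperJob(
        "my-table", scan, MyMapper.class,
        Text.class, Text.class, job,
        false); // addDependencyJars = false

// HBase's own jars then have to reach the cluster some other way, e.g.:
TableMapReduceUtil.addHBaseDependencyJars(job.getConfiguration());

I haven't tried this yet; just thinking out loud.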
Debug code:
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;
import org.elasticsearch.hadoop.util.Version;

// Resolve the es-hadoop Version class to a resource path and ask the
// classloader for every jar that can supply it.
String target = Version.class.getName().replace(".", "/").concat(".class");
System.out.println("Target: " + target);
Enumeration<URL> res = null;
try {
    res = Version.class.getClassLoader().getResources(target);
} catch (IOException ex) {
    System.out.println("Issue: " + ex.getMessage());
}
if (res != null) {
    List<URL> urls = Collections.list(res);
    for (URL url : urls) {
        System.out.println("URL: " + url);
    }
}