Hi, I'm running an Elasticsearch client in a Scala project and doing some tests in a local environment that sits behind a proxy (the proxy is set in $httpProxy, and my local machines are listed in the -Dhttp.nonProxyHosts Java option since they are local).
I have configured my connector with this:
val client = ElasticClient.remote(settings, uri)
and I have only set the clusterName in the settings.
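For reference, this is roughly what the client setup looks like; the cluster name and URI are placeholders, and the exact settings builder may differ depending on the elastic4s / Elasticsearch version:

import org.elasticsearch.common.settings.ImmutableSettings
import com.sksamuel.elastic4s.{ElasticClient, ElasticsearchClientUri}

// Only the cluster name is set; everything else is left at its defaults.
val settings = ImmutableSettings.settingsBuilder()
  .put("cluster.name", "my-local-cluster")   // placeholder cluster name
  .build()

val uri = ElasticsearchClientUri("elasticsearch://localhost:9300")
val client = ElasticClient.remote(settings, uri)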
But when I run my integration test, I get:
ERROR [11-20-2015 14:03:41,478] [XCI=] org.apache.spark.executor.Executor - Exception in task 3.0 in stage 1.0 (TID 7)
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [GET] on [_nodes/transport] failed; server[null] returned [407|Proxy Authentication Required: [... this is a 407 proxy html page ...]
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:335) ~[elasticsearch-spark_2.10-2.1.0.Beta4.jar:2.1.0.Beta4]
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:300) ~[elasticsearch-spark_2.10-2.1.0.Beta4.jar:2.1.0.Beta4]
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:304) ~[elasticsearch-spark_2.10-2.1.0.Beta4.jar:2.1.0.Beta4]
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:118) ~[elasticsearch-spark_2.10-2.1.0.Beta4.jar:2.1.0.Beta4]
at org.elasticsearch.hadoop.rest.RestClient.discoverNodes(RestClient.java:100) ~[elasticsearch-spark_2.10-2.1.0.Beta4.jar:2.1.0.Beta4]
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverNodesIfNeeded(InitializationUtils.java:58) ~[elasticsearch-spark_2.10-2.1.0.Beta4.jar:2.1.0.Beta4]
at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:371) ~[elasticsearch-spark_2.10-2.1.0.Beta4.jar:2.1.0.Beta4]
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:38) ~[elasticsearch-spark_2.10-2.1.0.Beta4.jar:2.1.0.Beta4]
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEsWithMeta$1.apply(EsSpark.scala:87) ~[elasticsearch-spark_2.10-2.1.0.Beta4.jar:2.1.0.Beta4]
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEsWithMeta$1.apply(EsSpark.scala:87) ~[elasticsearch-spark_2.10-2.1.0.Beta4.jar:2.1.0.Beta4]
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63) ~[spark-core_2.10-1.4.0.jar:1.4.0]
at org.apache.spark.scheduler.Task.run(Task.scala:70) ~[spark-core_2.10-1.4.0.jar:1.4.0]
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213) [spark-core_2.10-1.4.0.jar:1.4.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_80]
TRACE [11-20-2015 14:03:41,480] [XCI=] org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport - Rx [...]
TRACE [11-20-2015 14:03:41,480] [XCI=] org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport - Closing HTTP transport to localhost:9200
TRACE [11-20-2015 14:03:41,480] [XCI=] org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport - Closing HTTP transport to localhost:9200
I'm pretty sure this happens because the client wants to discover the other nodes but cannot reach them, since the discovery requests are being routed through the proxy.
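On the Spark side, this is the kind of configuration I have been experimenting with; the es.* option names are the ones I found in the es-hadoop configuration reference, and all the host/index values are placeholders, so please correct me if these are not the right options or if they belong somewhere else:

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark.rdd.EsSpark
import org.elasticsearch.spark.rdd.Metadata.ID

val conf = new SparkConf()
  .setAppName("es-integration-test")             // placeholder app name
  .setMaster("local[*]")
  .set("es.nodes", "localhost")
  .set("es.port", "9200")
  // Is this the right way to stop the connector from calling _nodes/transport?
  .set("es.nodes.discovery", "false")
  // Or should the proxy be configured explicitly here instead of via $httpProxy?
  .set("es.net.proxy.http.host", "my-proxy-host")   // placeholder proxy host
  .set("es.net.proxy.http.port", "3128")            // placeholder proxy port

val sc = new SparkContext(conf)
// Minimal write that reproduces the failing call in the stack trace above.
val docs = sc.parallelize(Seq((Map(ID -> "1"), Map("field" -> "value"))))
EsSpark.saveToEsWithMeta(docs, "myindex/mytype")    // placeholder index/type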
What options am I missing, and where should I add them?
Thanks.