I'm experimenting with a Spark app that reads whole documents from one ES index.
(ES 1.2.1, Spark 1.3.0, elasticsearch-spark 2.1.0)
While running it, the logs show that the query fails against every node. Please see below.
> 15/07/14 13:18:54 ERROR NetworkClient: Node [Invalid target URI HEAD@null/<idx>/<type>}] failed (<ip1>:9200); selected next node [<ip2>:9200]
> ...
> 15/07/14 13:18:54 ERROR NetworkClient: Node [Invalid target URI HEAD@null/<idx>/<type>}] failed (<ipN>:9200); no other nodes left - aborting...
Below is the relevant part of my app code (abbreviated).
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._  // provides sc.esRDD

val sparkConf = new SparkConf().setAppName("error-log-analyzer")
val indexName = <idx>
val typeStr = <type>
...
sparkConf.set("es.resource", s"${indexName}/${typeStr}")
sparkConf.set("es.index.auto.create", "no")
sparkConf.set("es.nodes",
driverConf.getStringList("elasticsearch.nodes").mkString(",")
)
val sc = new SparkContext(sparkConf)
val rdd = sc.esRDD(s"${indexName}/${typeStr}}")
`elasticsearch.nodes` in my config is `["<ip1>:9200", "<ip2>:9200", ..., "<ipN>:9200"]`.
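For reference, a minimal sketch (with made-up IPs standing in for my real hosts) of how that list is joined into the comma-separated `es.nodes` value the connector expects:

```scala
import scala.collection.JavaConverters._

// Made-up node list standing in for elasticsearch.nodes from my driver config.
val nodeList: java.util.List[String] =
  java.util.Arrays.asList("10.0.0.1:9200", "10.0.0.2:9200")

// Same mkString(",") call as in the app code above.
val esNodes = nodeList.asScala.mkString(",")
// esNodes: "10.0.0.1:9200,10.0.0.2:9200"
```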
A manual HEAD request to `http://<ip>:9200/<idx>/<type>` succeeds (200 OK).
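That manual check can be sketched in Scala like this (the helper names `resourceUrl` and `headStatus` are hypothetical, and hosts/index/type are placeholders, not my real values):

```scala
import java.net.{HttpURLConnection, URL}

// Hypothetical helper: builds the same URL I query by hand.
def resourceUrl(host: String, index: String, typ: String): String =
  s"http://$host/$index/$typ"

// Hypothetical helper: issues the HEAD request and returns the status code.
def headStatus(urlStr: String): Int = {
  val conn = new URL(urlStr).openConnection().asInstanceOf[HttpURLConnection]
  conn.setRequestMethod("HEAD")
  try conn.getResponseCode   // 200 means the index/type resource exists
  finally conn.disconnect()
}
```

In my environment, `headStatus(resourceUrl("<ip>:9200", "<idx>", "<type>"))` returns 200.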
Thanks in advance!
P.S. My index name contains some '.' and '-' characters. Does that matter?