Invalid target URI error while reading, but the index/type exists

I'm experimenting with a Spark app that reads all ES documents from one index.
(ES 1.2.1, Spark 1.3.0, elasticsearch-spark 2.1.0)

While experimenting, the logs tell me the query fails against every node. Please see below.

> 15/07/14 13:18:54 ERROR NetworkClient: Node [Invalid target URI HEAD@null/<idx>/<type>}] failed (<ip1>:9200); selected next node [<ip2>:9200]
> ...
> 15/07/14 13:18:54 ERROR NetworkClient: Node [Invalid target URI HEAD@null/<idx>/<type>}] failed (<ipN>:9200); no other nodes left - aborting...

Below is a shortened version of my app code.

```scala
val sparkConf = new SparkConf().setAppName("error-log-analyzer")
val indexName = <idx>
val typeStr = <type>
...
sparkConf.set("es.resource", s"${indexName}/${typeStr}")
sparkConf.set("es.index.auto.create", "no")
sparkConf.set("es.nodes",
  driverConf.getStringList("elasticsearch.nodes").mkString(",")
)

val sc = new SparkContext(sparkConf)
val rdd = sc.esRDD(s"${indexName}/${typeStr}}")
```

`elasticsearch.nodes` is `["<ip1>:9200", "<ip2>:9200", ... , "<ipN>:9200"]`.
Querying http://<ip>:9200/<idx>/<type> with HEAD by hand succeeds (200 OK).

Thanks in advance!

P.S. My index name contains some '.' and '-' characters. Does that matter?

Can you enable logging on `org.elasticsearch.hadoop.rest` all the way up to TRACE, re-run your script, and post the logs as a gist? It looks like the hosts/nodes are not properly configured; it might be a bug or a misconfiguration.
Either way, the connector should provide better validation instead of this cryptic error message.
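For reference, raising that logger to TRACE is a one-line change in the log4j configuration Spark picks up (a sketch, assuming the default log4j 1.x `log4j.properties` in Spark's `conf/` directory; the category name is the package mentioned above):

```properties
# Raise elasticsearch-hadoop's REST layer to TRACE so each request/response is logged
log4j.logger.org.elasticsearch.hadoop.rest=TRACE
```

The same setting can also be passed per-job, e.g. via `--driver-java-options` pointing at a custom `log4j.properties`.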

Thanks,

Sure, here's the gist link.

Was a resolution ever found for this? I'm getting this error too. If I set "es.index.auto.create" to "true" the error seems to go away, but that's something I don't want to do.