First off, use only one jar: either the hadoop one or the spark one, not both.
Second, the issue occurs because you are running Elasticsearch in AWS and, as with many cloud providers, there is a difference between the advertised, public IP of the nodes and the actual internal IPs on which Elasticsearch runs.
In other words, the connector hits Elasticsearch on a public IP, asks for the location of the shards, and gets back internal EC2 IPs which are not accessible from Spark.
This can be fixed by configuring Elasticsearch to publish the proper/public IPs, as explained in the reference documentation.
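As a rough sketch, the publish address is controlled in `elasticsearch.yml`; the hostname below is a placeholder, substitute the externally reachable address of your own node:

```yaml
# elasticsearch.yml (sketch; the hostname is a placeholder for your node's public address)

# Advertise the externally reachable address instead of the internal EC2 IP,
# so clients (such as the es-hadoop/spark connector) can reach each node.
network.publish_host: ec2-xx-xx-xx-xx.compute-1.amazonaws.com

# Keep binding on the local/internal interface as before.
network.bind_host: 0.0.0.0
```

Note that every data node needs its own publish address set, since the connector talks to the shards directly.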
Furthermore, I assume you are familiar with the cloud-aws plugin; if not, please try it out.
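For illustration, a minimal configuration with the cloud-aws plugin typically enables EC2-based discovery and lets the node resolve its public address via the AWS metadata, roughly along these lines (region and credentials setup are assumptions, adjust for your deployment):

```yaml
# elasticsearch.yml (sketch, assuming the cloud-aws plugin is installed)

# Use the AWS API for node discovery instead of hard-coded IPs.
discovery.type: ec2
cloud.aws.region: us-east-1   # assumption: replace with your region

# With the plugin, the publish address can be resolved from EC2 metadata
# instead of being hard-coded per node.
network.publish_host: _ec2:publicDns_
```

This avoids maintaining per-node hostnames by hand, which is the usual reason the advertised and actual addresses drift apart in the first place.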