Hi,
I tried to insert 580 docs (~2 MB in size) into ES, but I got the message below.
When I decrease the number of docs to 50, the job succeeds.
I am working with one Spark partition and tried tweaking the batch size and the batch entries,
but didn't manage to get past this...
I can also connect to the ES nodes from Spark without problems.
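For reference, this is roughly how the job is set up (just a sketch: the node addresses, index/type name, and documents are placeholders, and the batch settings are the values I experimented with):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._

// Rough sketch of the write job; node addresses, index/type and docs are placeholders.
val conf = new SparkConf()
  .setAppName("es-write")
  .set("es.nodes", "node1:9200,node2:9200,node3:9200,node4:9200")
  .set("es.batch.size.entries", "100")  // tried lowering from the default of 1000
  .set("es.batch.size.bytes", "512kb")  // tried lowering from the default of 1mb
val sc = new SparkContext(conf)

// ~580 documents, written from a single partition.
val docs: Seq[Map[String, Any]] = Seq(
  Map("id" -> 1, "name" -> "example")   // placeholder content
)
val docsRdd = sc.makeRDD(docs).repartition(1)

docsRdd.saveToEs("my_index/my_type")    // placeholder index/type
```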
org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings) - all nodes failed; [nodes with IP and port]
Some details:
4 Spark and Elasticsearch nodes (4 cores, 24 GB each; Elasticsearch heap size 2 GB)
spark 1.6.0
es 2.3.3
es-hadoop 2.4.2
@Netanel_Malka In 5.0 and above, the connector logs a sample of the errors that it receives from the Elasticsearch cluster. In the meantime, I would increase the logging level to TRACE for your job and inspect the error messages on the bulk responses. Also be sure to check your Elasticsearch node logs for rejections.
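For example (a minimal sketch, assuming you're using the log4j 1.x that ships with Spark 1.6), you can raise the connector's REST layer to TRACE before the write:

```scala
import org.apache.log4j.{Level, Logger}

// Bump es-hadoop's REST/bulk logging to TRACE so the bulk error responses
// show up in the driver/executor logs (can also be done via log4j.properties).
Logger.getLogger("org.elasticsearch.hadoop.rest").setLevel(Level.TRACE)
```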
Thanks for the response.
I found that I only get this error when I try to insert a doc with a big geo_shape (thousands of coordinates).
I also tried it on Elasticsearch 5.2.2 with the matching connector version, but the connector does not log why it failed.
I can see (at DEBUG level) that it finds the appropriate shards and the partition writer instance on the right address, but then I get this exception again.
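To narrow it down I split the RDD by geo_shape size before writing, something like this (docsRdd, the "shape"/"coordinates" field names, and the 10,000-coordinate threshold are placeholders for my actual data):

```scala
import org.elasticsearch.spark._

// Rough coordinate count per document; assumes a flat coordinates array,
// adjust for nested polygon rings. Field names are placeholders.
def coordCount(doc: Map[String, Any]): Int = doc.get("shape") match {
  case Some(shape: Map[_, _]) =>
    shape.asInstanceOf[Map[String, Any]].get("coordinates") match {
      case Some(coords: Seq[_]) => coords.size
      case _                    => 0
    }
  case _ => 0
}

val small = docsRdd.filter(d => coordCount(d) <= 10000)
val big   = docsRdd.filter(d => coordCount(d) > 10000)

small.saveToEs("my_index/my_type")  // these documents index fine
big.saveToEs("my_index/my_type")    // only this write fails
```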
Also, when I try to delete the index I get acknowledged: false, although the index is deleted. But then I can't create any index until I restart the whole cluster...