Writing to an ECE-run Elasticsearch cluster from the elasticsearch-spark connector fails

I have about 600M rows (roughly 200 GB of data) being written from Hive to Elasticsearch via the elasticsearch-spark connector.

Things start out fine but slow down progressively, and around the midpoint I get the following exception on the Spark client side:

Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[http://77eba63e96b349a88b3ce7560af43d4a.1.2.3.4.ip.es.io:9200]]

My Elasticsearch cluster has 4 nodes with 8 GB RAM each.

I have not set up a load balancer in front of the proxies, but I have listed all of them via es.nodes, e.g. es.nodes=http://1.2.3.4:9200,http://2.3.4.5:9200, etc.
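
For context, the write path looks roughly like this; a minimal sketch assuming a Hive-backed DataFrame, where the table, index name, endpoints, and the batch/retry values are placeholders rather than my exact settings. The es.nodes.wan.only and batch/retry options are assumptions on my part about what might matter here, not something I've confirmed.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()
val hiveDf = spark.table("mydb.mytable")  // hypothetical source table in Hive

hiveDf.write
  .format("org.elasticsearch.spark.sql")
  .option("es.nodes", "http://1.2.3.4:9200,http://2.3.4.5:9200")  // ECE proxy endpoints
  .option("es.nodes.wan.only", "true")         // talk only to the listed endpoints, no node discovery
  .option("es.batch.size.entries", "1000")     // smaller bulk requests ease pressure on the 8 GB nodes
  .option("es.batch.size.bytes", "1mb")
  .option("es.batch.write.retry.count", "10")  // retry rejected bulk requests instead of failing the task
  .option("es.batch.write.retry.wait", "30s")
  .mode("append")
  .save("myindex")                             // placeholder index name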

Any tips on how to troubleshoot?

I would suggest checking the cluster's metrics and logs. Logs and metrics are shipped to the logging-and-metrics cluster. On the cluster details page you can find links that open the Kibana instance assigned to the logging cluster with pre-defined filters.
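
Beyond the Kibana links, one thing that often correlates with EsHadoopNoNodesLeftException during heavy indexing is the write (bulk) thread pool rejecting requests. A rough sketch of checking that directly over the REST API, where the endpoint and credentials are placeholders:

import java.net.{HttpURLConnection, URL}
import java.util.Base64
import scala.io.Source

val endpoint = "http://77eba63e96b349a88b3ce7560af43d4a.1.2.3.4.ip.es.io:9200"      // your ECE proxy URL
val creds = Base64.getEncoder.encodeToString("elastic:changeme".getBytes("UTF-8"))  // placeholder credentials

// "write" is the indexing thread pool on 6.3+; older versions expose it as "bulk"
val url = new URL(s"$endpoint/_cat/thread_pool/write?v&h=node_name,name,active,queue,rejected")
val conn = url.openConnection().asInstanceOf[HttpURLConnection]
conn.setRequestProperty("Authorization", s"Basic $creds")
println(Source.fromInputStream(conn.getInputStream).mkString)

A steadily growing rejected count would point at the cluster not keeping up with the bulk load.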

The cluster works fine afterwards. What should we be looking for in the metrics cluster, anything specific?
