I'm executing Spark against Elasticsearch using the Elasticsearch API.
I have 6 executors with one core each. There are no queued tasks on the Spark side. I only have two Elasticsearch nodes, each with 8 cores and 32 GB, but it seems they should handle that traffic.
I have checked the Elasticsearch logs as well, but there aren't any relevant entries.
Right now, I have reduced the number of executors to 3 to see what happens.
Is it really too many producers? It seems not, since there are no queued tasks on the Spark side, and I also checked the CPU usage on the Elasticsearch nodes: it's only about 30%. This is the exception:
User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 82742.0 failed 4 times, most recent failure: Lost task 4.3 in stage 82742.0 (TID 262382,xxxx): org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: Found unrecoverable error [xxx:9200] returned Too Many Requests(429) - rejected execution of org.elasticsearch.transport.TransportService$4@2c70992a on EsThreadPoolExecutor[bulk, queue capacity = 50, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@294f7f8b[Running, pool size = 8, active threads = 8, queued tasks = 50
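The exception itself suggests a cause independent of CPU: the bulk thread pool on the node has all 8 threads active and its queue (capacity 50) full, so new bulk requests are rejected with 429 even though overall CPU is low. Besides reducing executors, one option is to make each writer send smaller bulks and retry on rejection via the elasticsearch-hadoop write settings. The property names below are real es-hadoop options; the values, the node address, and the `df` write call are illustrative assumptions, not a tested configuration:

```python
# Hypothetical throttling settings for elasticsearch-hadoop bulk writes.
# Smaller bulks and more patient retries reduce pressure on the nodes'
# bulk queue (capacity 50 in the error above).
es_write_conf = {
    "es.nodes": "xxx:9200",              # placeholder node address
    "es.batch.size.entries": "500",      # docs per bulk request (default 1000)
    "es.batch.size.bytes": "1mb",        # bytes per bulk request (default 1mb)
    "es.batch.write.retry.count": "6",   # retries on rejection (default 3)
    "es.batch.write.retry.wait": "30s",  # wait between retries (default 10s)
}

# Illustrative usage from a Spark DataFrame writer (requires the
# elasticsearch-hadoop jar on the classpath; 'my-index' is a placeholder):
#
# df.write.format("org.elasticsearch.spark.sql") \
#     .options(**es_write_conf) \
#     .mode("append") \
#     .save("my-index")
```

With 6 single-core executors each flushing 1000-document bulks by default, short bursts can overflow a 50-slot queue; halving the batch size and raising the retry budget spreads the same write volume over time instead of failing the task after 4 attempts.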