No. The error is not push back; it's the connector giving up.
The term "push back" doesn't really apply here since there's no bi-directional communication between Hadoop and the connector - the connector cannot tell the source "there's too much data, slow down". The connector is simply told to write data and, when Elasticsearch is under load, it keeps retrying, each time with only the data that remains (typically less and less), giving Elasticsearch breathing space while keeping the Hadoop job from writing more data.
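Conceptually, the loop looks something like the sketch below. This is an illustrative sketch of the behaviour described above, not the connector's actual code; `bulkWrite` is a made-up stand-in for an Elasticsearch `_bulk` call that returns the documents Elasticsearch rejected (e.g. with HTTP 429):

```java
import java.io.IOException;
import java.util.List;
import java.util.function.Function;

class RetryingBulkWriter {
    // batch: documents to index; bulkWrite: sends a bulk request and
    // returns the subset that was rejected (hypothetical signature)
    static void write(List<String> batch,
                      Function<List<String>, List<String>> bulkWrite,
                      int maxRetries, long retryWaitMillis)
            throws IOException, InterruptedException {
        List<String> pending = batch;
        for (int attempt = 0; !pending.isEmpty(); attempt++) {
            if (attempt > maxRetries) {
                // The point where the connector "gives up" and the task fails
                throw new IOException(pending.size()
                        + " docs still rejected after " + maxRetries + " retries");
            }
            if (attempt > 0) {
                Thread.sleep(retryWaitMillis); // give Elasticsearch breathing space
            }
            // Retry only what remains - typically less and less each time
            pending = bulkWrite.apply(pending);
        }
    }
}
```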
To reiterate what I was saying in my previous post: it's only Elasticsearch that pushes back. The connector takes that into account through `es.batch.write.retry.count` and `es.batch.write.retry.wait` - that is, it waits a bit and retries again.
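For example, if a job keeps failing under load, one can raise these settings when configuring the Hadoop job. The property names are the real es-hadoop settings mentioned above; the values here are arbitrary, so tune them to your cluster:

```java
import org.apache.hadoop.conf.Configuration;

public class EsRetrySettings {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Allow more retries and a longer wait between them than the
        // defaults (3 retries, 10s wait)
        conf.set("es.batch.write.retry.count", "10");
        conf.set("es.batch.write.retry.wait", "30s");
        // ... pass conf to the job writing through the connector
    }
}
```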
Hadoop doesn't support push back, hence the connector cannot push data back to the source; instead it stalls the pipeline, which is not ideal but somewhat effective (depending on how many tasks one has).