I have the same issue. I originally had Logstash 5.6.2 and got the sanitized error message; after upgrading to 5.6.3 I now get the reported error message:
[logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>
I'm importing data from SQL using the JDBC input plugin. Does this error occur when a document is too large for Elasticsearch, causing the bulk request to fail? I have another Logstash input running with smaller document sizes, and it processes for hours before it eventually freezes. That is obviously a separate problem, but resolving this one is crucial.
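One way to test the oversized-document hypothesis is to record each event's serialized size before it reaches the elasticsearch output. This is only a sketch: the `json_size` field name and the 10 MB threshold are my own choices, not known Elasticsearch limits, but the `ruby` filter and `event.set`/`event.to_json` calls are standard Logstash 5.x APIs.

```
filter {
  ruby {
    # Store the serialized size of the event (in bytes) under @metadata
    # so it is visible to conditionals but never indexed.
    code => "event.set('[@metadata][json_size]', event.to_json.bytesize)"
  }
}

output {
  # Dump any event larger than an arbitrary 10 MB threshold (an assumption,
  # not a documented limit) so you can see which rows are unusually large.
  if [@metadata][json_size] > 10485760 {
    stdout { codec => rubydebug { metadata => true } }
  }
}
```

If the 400 responses line up with the largest events, that would point at request/document size rather than a cluster-side problem.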
BTW, I have a 3-node cluster and the error message appears for all three nodes.