FSCrawler - Indexing mix of Big and small files - HTTP Entity too large error

The problem is here:

"indexed_chars" : "-1"

You are asking FSCrawler to extract the whole text content of every file. I'm not sure how much data that actually represents, but it may well be too much.
There's a limit on the Elasticsearch side (`http.max_content_length`), which is 100mb by default. I would not recommend increasing that limit unless you know exactly what you are doing.
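If you don't need the full text of huge files for search, another option is to cap extraction instead of using `-1`. A sketch of what that could look like in the job's settings file (job name and path are placeholders, and the actual character limit depends on your use case):

```yaml
name: "my_job"
fs:
  url: "/path/to/files"
  # Index at most the first 10000 characters of each document
  # instead of the full content ("-1" means no limit)
  indexed_chars: "10000"
```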

Instead of using bulk_size, you could use byte_size:

elasticsearch.byte_size: 80mb
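For context, that setting goes under the `elasticsearch` section of the job's settings file. A sketch (job name is a placeholder):

```yaml
name: "my_job"
elasticsearch:
  # Flush a bulk request once it reaches roughly this size,
  # keeping it safely under Elasticsearch's 100mb request limit
  byte_size: "80mb"
```

With `byte_size`, FSCrawler flushes bulks based on payload size rather than document count, which avoids a single bulk of big documents blowing past the HTTP limit.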

See Elasticsearch settings — FSCrawler 2.10-SNAPSHOT documentation

HTH