An alternative limit to bulk_max_size that functions based on the payload size instead:
output:
  ### Elasticsearch as output
  elasticsearch:
    # Array of hosts to connect to.
    hosts: ["${ES_HOST}:${ES_PORT}"]

    # The maximum size to send in a single Elasticsearch bulk API index request.
    bulk_max_body_size: 10M

    # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
    bulk_max_size: 50
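(bulk_max_body_size is the proposed option name and does not exist today; presumably a batch would be flushed as soon as either of the two limits is reached, whichever comes first.)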
This limit is required because managed Elasticsearch deployments (such as AWS Elasticsearch Service) impose request size limits, 10 MB at the entry level.
See: http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-limits.html
Because a single multiline message caps at 10 MB by default, and bulk_max_size batches 50 events per request, the current worst-case request size is about 500 MB (50 × 10 MB), plus some overhead.
Currently, when this happens, a 413 error is repeated indefinitely. Specifically:
client.go:244: ERR Failed to perform any bulk index operations: 413 Request Entity Too Large
As there is no way to increase the limit on the AWS side, nor on the filebeat side, other than to greatly decrease the maximum log message size and bulk_max_size (a sketch of that workaround follows), this greatly limits the configuration options in such situations.
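For reference, a minimal sketch of that workaround, using filebeat's existing max_bytes prospector option; the path and the exact figures are illustrative, not a recommendation:

filebeat:
  prospectors:
    # Illustrative path; adjust to the actual inputs.
    - paths: ["/var/log/app/*.log"]
      # Truncate any single event at 100 KB (the default is 10 MB).
      max_bytes: 102400

output:
  elasticsearch:
    hosts: ["${ES_HOST}:${ES_PORT}"]
    # 50 events × 100 KB is ~5 MB worst case, safely under the 10 MB cap.
    bulk_max_size: 50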