Bulk_max_body_size support?

An alternative limit to bulk_max_size that works on the payload size instead of the event count.

output:

  ### Elasticsearch as output
  elasticsearch:
    # Array of hosts to connect to.
    hosts: ["${ES_HOST}:${ES_PORT}"]

    # The maximum size to send in a single Elasticsearch bulk API index request.
    bulk_max_body_size: 10M

    # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
    bulk_max_size: 50

This limit is needed because managed Elasticsearch deployments (such as AWS)
impose HTTP request size limits, 10 MB at the entry level.

See: http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-limits.html

Because a single multiline message caps at 10 MB by default, and the default batch size is 50 events,
the current worst-case request size is about 500 MB, plus some overhead.
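The worst-case arithmetic above can be sketched as follows (a minimal illustration; the constant names are mine, not filebeat's):

```python
# Worst-case bulk request size under the default filebeat settings
# (assumptions: multiline message cap of 10 MB per event, 50 events per batch).
MAX_EVENT_BYTES = 10 * 1024 * 1024   # default multiline cap per event
BULK_MAX_SIZE = 50                   # default events per bulk request

worst_case = MAX_EVENT_BYTES * BULK_MAX_SIZE
print(worst_case // (1024 * 1024))   # prints 500 (MB, before bulk API framing overhead)
```

Any single request above the 10 MB service limit is rejected, so the default worst case overshoots it by a factor of roughly 50.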

Currently, when this happens, a 413 error is repeated indefinitely. Specifically:

  client.go:244: ERR Failed to perform any bulk index operations: 413 Request Entity Too Large

There is no way to increase the limit on the AWS side, and no way on the filebeat side
other than to greatly decrease the maximum log size and bulk_max_size.

This greatly limits the configuration options in such situations.
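To make the request concrete, a size-based limit would mean splitting each batch so that every bulk request stays under the body-size cap. A minimal sketch of that idea, assuming pre-serialized events (a hypothetical helper, not filebeat's actual implementation):

```python
def split_by_body_size(events, max_body_bytes):
    """Split serialized events into sub-batches whose total payload
    stays under max_body_bytes. Hypothetical sketch of what a
    bulk_max_body_size option could do client-side."""
    batches, current, size = [], [], 0
    for ev in events:
        ev_len = len(ev)
        # Flush the current sub-batch if adding this event would overflow it.
        if current and size + ev_len > max_body_bytes:
            batches.append(current)
            current, size = [], 0
        current.append(ev)
        size += ev_len
    if current:
        batches.append(current)
    return batches

# Events of 4, 4, and 9 bytes with an 8-byte cap: the first two fit
# together, the oversize third is sent alone (and would still 413).
print(split_by_body_size([b"a" * 4, b"b" * 4, b"c" * 9], 8))
```

Note that a single event larger than the cap still ends up alone in its own request, so a size-based limit cannot fully replace trimming the multiline maximum, but it would bound the batch overhead.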

This is currently not supported, but agreed, this feature totally makes sense to have (it's a MUST). Feel free to open an enhancement request.

Sure, filed as: https://github.com/elastic/beats/issues/3688
