Filebeat - stalls and effectively hangs when max request size exceeded

I'm using filebeat to send raw JSON documents (one per line) to a hosted (cloud.elastic.co) ES cluster.

The max request size is not configurable server-side, and despite tweaking the batch size in filebeat's elasticsearch output config, I can't find a good balance.

When filebeat encounters a "request too large" error, it simply retries forever, never successfully sending up anything at all. When I kill the process, it restarts and presumably the batch it prepares is different, and things work again, at least for a time.

Is there some filebeat config I can do to remedy this?

Can you share your filebeat configuration and logs? How big is your hosted cluster?

xpack.monitoring.enabled: true
logging.level: info

cloud.id: "xx:yy"
cloud.auth: "zz"

filebeat.registry_file: D:/home/site/filebeatregistry

filebeat.prospectors:
- type: log
  enabled: true
  json.keys_under_root: true
  json.overwrite_keys: true
  paths:
  - D:/local/temp/jobs/continuous/worker/**/safflog_*.txt
  - D:/home/site/wwwroot/**/safflog_*.txt
  
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml    

setup.template:
  name: "filebeat-rr-logs"
  pattern: "filebeat-rr-logs-*"
  settings:
    index:
      number_of_shards: 1
      number_of_replicas: 0

output.elasticsearch:
  bulk_max_size: 2000
  index: "filebeat-rr-logs-%{[beat.version]}-%{+yyyy.ww}"
  compression_level: 2

Logs-wise:

2019-04-26T08:31:41.672Z ERROR elasticsearch/client.go:317 Failed to perform any bulk index operations: 413 Request Entity Too Large:

(just repeats)

Hosted cluster is 8GB RAM / 192GB storage, single zone; 1 shard, 0 replicas as above.

Thanks!

Elasticsearch complains about the size of the bulk request. Have you tried to reduce bulk_max_size?
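Something like this, as a starting point (500 is just an arbitrary value to experiment with, not something tuned against your data — halve it again if the 413s continue):

output.elasticsearch:
  # Fewer events per bulk request means a smaller request body,
  # which is what the 413 response is complaining about.
  bulk_max_size: 500
  index: "filebeat-rr-logs-%{[beat.version]}-%{+yyyy.ww}"
  compression_level: 2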

Sorry, I didn't see the reply. Yes, but if it's too low, the throughput suffers. It's the occasional very large set of requests that throws things off.

If there were a bulk_max_size_bytes setting, that would be perfect, of course, but bulk_max_size is a count of events, not bytes.

The client (filebeat) uses a max number of items, and the server (elasticsearch) uses a max request size. I think that's the issue?
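For what it's worth, since the only knob on the filebeat side is an event count, the best workaround I can see is to derive that count from the worst-case line size: take whatever request-size limit the deployment actually enforces, divide by the largest JSON document the app can emit, and set bulk_max_size somewhat below that. A rough sketch, where the ~100 MB figure (the Elasticsearch default for http.max_content_length, which may not be what the Cloud proxy enforces) and the ~50 KB worst-case line are both assumptions to replace with real numbers:

output.elasticsearch:
  # Assumed limit / assumed worst-case event: ~100 MB / ~50 KB ≈ 2000 events,
  # so stay well below that to leave headroom for the bulk action metadata.
  bulk_max_size: 1000
  # gzip-compressing the request body (levels 1-9) shrinks what goes over the wire.
  compression_level: 3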
