kafka/log.go:53 producer/broker/38402 "maximum request accumulated, waiting for space" even when I set output.kafka.required_acks: 0?

Hello,
I see lots of "INFO kafka/log.go:53 producer/broker/38322 maximum request accumulated, waiting for space" even though I set output.kafka.required_acks: 0. I am using Filebeat 7.2. I thought that with required_acks: 0 it would not wait for the ack (just send and forget) and would not buffer, so why does it complain about "maximum request accumulated"?
BTW, I set required_acks: 0 to see whether waiting for the Kafka ACK is the bottleneck of Filebeat throughput. After setting required_acks: 0, the throughput did not increase and it still complains about "waiting for space". Based on these two observations, I suspect that required_acks: 0 is not being respected. Did Filebeat 7.2 change its config syntax?

my config file looks like the following:

output.kafka:
  hosts: ["..."]
  topic: 'graph-log-exp'
  partition.hash:
    hash: ['beat.hostname', 'source']
    random: true # if false, non-hashable events will be dropped
  required_acks: 0
  compression: gzip
  max_message_bytes: 1000000
  ssl.certificate_authorities: ["/etc/riddler/ca-bundle.crt"]
  ssl.certificate: "secure/identity.cert"
  ssl.key: "secure/identity.key"

  bulk_max_size: 262144
  worker: 1
  client_id: in-beat
  channel_buffer_size: 26222400

thanks!
yan

This isn't necessarily a problem; it depends on your setup. required_acks: 0 reduces the time it takes to send the data, but you can still get "maximum request accumulated" if, for example, the network bandwidth is saturated and more events are waiting to be sent. Whether this is a problem depends on how often it happens and what latency is acceptable in your situation.
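If the producer side itself is the constraint, raising output parallelism and batch size is one thing to experiment with. A minimal sketch of the relevant knobs (the values below are illustrative assumptions, not recommendations for your hardware):

```yaml
output.kafka:
  required_acks: 0     # fire-and-forget; broker sends no ack
  worker: 4            # hypothetical: run several Kafka clients in parallel
  bulk_max_size: 2048  # hypothetical: events per batch handed to the client
  compression: gzip    # trade CPU for network bandwidth
```

Note that very large bulk_max_size values (like the 262144 in your config) can themselves trigger "waiting for space" messages while a huge batch is being assembled, so it is worth testing smaller batches too.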

Kafka itself may also be a bottleneck: even if you don't wait for all acks to come in, the partition itself can still be overloaded. In this case you might want to change your Kafka configuration to use multiple partitions.
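For reference, the partition count of an existing topic can be raised with the standard Kafka CLI. The broker address and partition count below are placeholders; check your cluster before altering a topic, since partition counts cannot be decreased afterwards:

```shell
# Placeholder broker address and partition count; adjust for your cluster.
# Increasing partitions lets more consumers/producers work in parallel.
kafka-topics.sh --bootstrap-server localhost:9092 \
  --alter --topic graph-log-exp --partitions 12
```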

@faec We are trying to understand where the bottleneck in Filebeat is, which is why we set required_acks: 0 to see whether it would yield higher throughput. The current throughput we get from Filebeat sending to Kafka is 25 MB/s, which is much lower than our hardware and network capacity.

We ruled out the network and the Kafka brokers as the bottleneck for the following reasons:
(1) To rule out the network, we scp'd a large file over the same network and got 87 MB/s.
(2) Our Kafka brokers have a quota of 52 MB/s per client ID per broker, which is also well above 25 MB/s, and setting the Kafka partition selection to round-robin did not help.
