ERR Kafka (topic=filebeat-test-logmiss30): dropping too large message of size 3463

I use Filebeat to publish logs to Kafka, but I get:
"2018/01/25 08:39:54.536086 client.go:203: ERR Kafka (topic=filebeat-test-logmiss30): dropping too large message of size 3463."

Configurations:

  1. Filebeat (6.1.2)
    output.kafka:
    max_message_bytes: 1000000
    version: 0.11.0.0

  2. Kafka (0.11.0.1)
    Kafka broker config:
    message.max.bytes: 1000012

Note: there are 27000 logs in total, each the same size (3k), yet half of them are published successfully and half are dropped.
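Put together, the output.kafka section of filebeat.yml would look roughly like this (a sketch reconstructed from the settings above; hosts is a placeholder, and the topic name is taken from the error message):

  output.kafka:
    hosts: ["kafka-broker:9092"]      # placeholder, not from the thread
    topic: "filebeat-test-logmiss30"  # topic from the error message
    version: "0.11.0.0"
    max_message_bytes: 1000000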

Can you try setting output.kafka.bulk_max_size in Filebeat to 100 to see if the problem goes away?

It seems that Kafka is applying the message.max.bytes limit to the whole batch and not to individual messages.
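That is, something like this in filebeat.yml (a sketch; 100 is far below the default, so each produce request would carry a much smaller batch):

  output.kafka:
    bulk_max_size: 100   # default is 2048 in Filebeat 6.x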

It works. So output.kafka's bulk_max_size times max_message_bytes should be smaller than Kafka's message.max.bytes.

We've been reviewing the code and it seems my original diagnosis was wrong. Changing bulk_max_size may just be mitigating the problem by sheer luck, while also reducing throughput, so it is not a good fix.

We would like to investigate this problem further, can you provide us with:

  • Full filebeat.yml
  • Full Kafka broker configuration
  • tcpdump of Kafka traffic when this problem happens

Problem Situation

Filebeat

  1. filebeat-kafka.yml
  2. config/*.yml

Kafka

Kafka broker configuration:

broker.id=97
listeners=...
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=...
num.partitions=3
default.replication.factor=2
num.recovery.threads.per.data.dir=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=...
zookeeper.connection.timeout.ms=6000
reserved.broker.max.id=2147483647

Error log

2018/01/29 03:53:29.703780 client.go:203: ERR Kafka (topic=filebeat-test-0129): dropping too large message of size 3446.
2018/01/29 03:53:29.703802 client.go:203: ERR Kafka (topic=filebeat-test-0129): dropping too large message of size 3446.
2018/01/29 03:53:29.703811 client.go:203: ERR Kafka (topic=filebeat-test-0129): dropping too large message of size 3446.

tcpdump
sudo tcpdump -i em1 dst port 20099 -A

=====================================================================================
Fix
To ensure that "output.kafka's bulk_max_size times max_message_bytes is smaller than Kafka's message.max.bytes", I set the topic-level max.message.bytes to 52428800 (50 MB).
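The thread does not show the exact command, but with the same kafka-configs.sh tool used below, the topic-level override would be set with something like:

  ./kafka-configs.sh --zookeeper ... --entity-type topics --entity-name filebeat-test-0129 \
    --alter --add-config max.message.bytes=52428800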

Filebeat

Kafka
./kafka-configs.sh --zookeeper ... --entity-type topics --entity-name filebeat-test-0129 --describe
Configs for topic 'filebeat-test-0129' are max.message.bytes=52428800
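For comparison, the same limit could instead be raised broker-wide in server.properties (a sketch of the alternative, not what was done in this thread; replica.fetch.max.bytes must grow with it so followers can still replicate the larger messages):

  # server.properties
  message.max.bytes=52428800
  replica.fetch.max.bytes=52428800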

Throughput: 25K~30K records/s, ~80 MB/s (log size is 3k)

Please, can you post a packet capture file (pcap) instead of a screenshot?

You can use tcpdump's -w filename.pcap option or use Wireshark's File -> Save.
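For example, the capture command from earlier in the thread, writing to a file instead of printing packets (the filename is arbitrary):

  sudo tcpdump -i em1 dst port 20099 -w kafka.pcap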

https://drive.google.com/file/d/1x_IUe0Mj4O59IQesijPGoMvcoyDlvsN4/view?usp=sharing
