I'm using Filebeat (5.6.3) connecting to Kafka (0.10.2.1).
However, I get an error like the one below in Filebeat:
> 2017-11-29T17:12:28+08:00 ERR Kafka (topic=message): dropping too large message of large size 3038538.
This means some of my messages are being discarded on their way into Kafka because of their large message size.
I then tried to fix this in my Filebeat yml file by adding a line to it.
But it didn't work; the error still occurs.
Which config setting should I correct for this error? Or should I amend the config file in Kafka instead?
Kafka itself enforces a limit on message sizes. You will have to update the Kafka brokers to allow for bigger messages.
The Beats Kafka output checks the JSON-encoded event size. If the size hits the limit configured in the output, the event is dropped.
The max_bytes setting limits the log message size, but the encoded event can be much bigger due to additional fields and string escaping. That is why
max_bytes should be somewhat smaller than the maximum event size allowed by Kafka and by the Kafka output in Beats.
If you are fine with the default event limit in Kafka, try reducing
max_bytes somewhat more.
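As a minimal sketch, the Filebeat side of this lives under the Kafka output in filebeat.yml. The hosts, topic, and the max_bytes value below are illustrative assumptions, not the poster's actual settings:

```yaml
output.kafka:
  hosts: ["kafka1:9092"]   # illustrative broker address
  topic: "message"
  # Maximum permitted size of a JSON-encoded event; larger events are dropped.
  # Illustrative value only -- keep it somewhat below what the brokers accept.
  max_bytes: 1000000
```

Note that max_bytes applies to the encoded event, so the raw log line must be smaller still once extra fields and escaping are added.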
My problem was solved by two configuration changes:
- the Filebeat yml file, under "output.kafka:",
- the Kafka server properties.
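The broker side of such a change in Kafka's server.properties might look like the sketch below; the values are illustrative, not the ones the poster used:

```properties
# Maximum size (bytes) of a message the broker will accept.
message.max.bytes=5000000
# Should be at least as large as message.max.bytes so replicas can fetch these messages.
replica.fetch.max.bytes=5000000
```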
My downstream Logstash conf was also amended:
max_partition_fetch_bytes => " "
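For reference, a sketch of where that setting sits in a Logstash kafka input; the poster did not share their value, so the number, servers, and topic below are hypothetical:

```
input {
  kafka {
    bootstrap_servers => "kafka1:9092"   # hypothetical broker address
    topics => ["message"]
    # Maximum data per partition the consumer fetches per request (bytes).
    # Hypothetical value -- should be at least the broker's message.max.bytes.
    max_partition_fetch_bytes => "5000000"
  }
}
```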
Hope this helps someone.