So I use Filebeat 6.4.0 and Apache Kafka 2.11-2.0.0.
Have you had a look at filebeat logs and metrics?
Filebeat does not drop messages that can be published; it retries forever on error. But 'invalid' events (Kafka imposes a size limit per event) must be dropped so as not to block Filebeat. Trying to send Java logs to Kafka is a bit of a red flag here: Java logs with stack traces tend to become very big.
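Since a Java stack trace spans many lines, it helps to group it into a single event and cap how large that event can grow. A minimal sketch of a Filebeat 6.x input with multiline settings (paths and pattern are hypothetical, adjust to your log format):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/app.log        # hypothetical path
    # Lines starting with whitespace are treated as continuations
    # of the previous line (typical for Java stack traces).
    multiline.pattern: '^\s'
    multiline.negate: false
    multiline.match: after
    multiline.max_lines: 500          # cap lines merged into one event
```

With a cap like `multiline.max_lines`, a runaway stack trace is truncated instead of producing an event too large for Kafka to accept.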
Do you run your tests with a complete log file, or do you have an application just dumping logs? In the latter case:
- do you have a flush timeout/signal on the appender?
- do you just write logs like crazy? If so, do you have log rotation enabled? Is there a chance a file becomes unavailable before filebeat can pick it up?
Check your Kafka broker configs. In Beats the default max message size for the Kafka output is 1000000 bytes. If messages that small are being dropped, it looks like Kafka is rejecting them.
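For reference, this is roughly where that limit lives on the Filebeat side (hosts and topic are placeholders, not taken from your setup):

```yaml
output.kafka:
  hosts: ["kafka:9092"]            # hypothetical broker address
  topic: "app-logs"                # hypothetical topic name
  # Events larger than this are dropped by Filebeat before sending.
  # Must not exceed what the broker itself accepts.
  max_message_bytes: 1000000
```

Raising `max_message_bytes` alone won't help if the broker still rejects the larger messages, so both sides need to agree.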
Size depends on your logs. What's your current setting in the broker? Given the small sizes, I'd say you have to update the broker settings.
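On the broker side, the relevant knobs look roughly like this in `server.properties` (values here are illustrative, not a recommendation):

```properties
# Max message size the broker accepts per message (example value).
message.max.bytes=1000000

# Must be at least message.max.bytes, or replicas cannot
# fetch the largest messages and replication stalls.
replica.fetch.max.bytes=1048576
```

Whatever value you pick, keep Filebeat's `max_message_bytes` at or below the broker's `message.max.bytes`.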
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.