I have set up a 3-node cluster, and Kafka and ZooKeeper are running.
I can create a topic, produce messages to it, and consume them back (roughly the commands shown after the version list below).
- filebeat version 7.12.0 (amd64), libbeat 7.12.0
- kafka 2.13-2.7.0
- zookeeper 3.5.8
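For context, this is roughly how I verified the topic from one of the Kafka nodes (run from the Kafka bin directory; the partition and replication counts are just what I happened to use, and any SASL client settings are omitted here):

./kafka-topics.sh --create --topic TutorialTopic --bootstrap-server 192.100.100.120:9092 --partitions 3 --replication-factor 3
./kafka-console-producer.sh --topic TutorialTopic --bootstrap-server 192.100.100.120:9092
./kafka-console-consumer.sh --topic TutorialTopic --from-beginning --bootstrap-server 192.100.100.120:9092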
However, Filebeat is not able to output to Kafka.
Here's my Filebeat config:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /xxxx/xxxxx/xxxxxx/xxx/xxxx.txt

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: true
  # Period on which files under path should be checked for changes
  reload.period: 10s

output.kafka:
  enabled: true
  hosts: ["192.100.100.120:9092", "192.100.100.122:9092", "192.100.100.123:9092"]
  topic: TutorialTopic
  #topic: %{[TutorialTopic]}
  # Authentication details. Password is required if username is set.
  username: 'xxxxxx'
  password: 'xxxxxx'
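For reference, I run Filebeat in the foreground with debug logging enabled so the harvester and Kafka messages are visible (standard Filebeat flags; the config path is simply where mine lives):

filebeat -e -d "kafka,harvester" -c /etc/filebeat/filebeat.yml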
The startup log looks fine; it ends with the host metadata and the harvester reaching the end of my file:
"mac": [
"00:0c:29:02:xx:xx",
"00:0c:29:02:xx:xx"
],
"hostname": "xxxxxxx"
}
}
2021-07-11T21:51:04.644+0800 DEBUG [harvester] log/log.go:107 End of file reached: /xxxxxx.txt; Backoff now.
I do get one error, though:
ERROR [kafka] kafka/client.go:317 Kafka (topic=TutorialTopic): kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
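I'm not sure what else to check. I assume a plain port check from the Filebeat host, something like the following, would at least rule out basic network problems (the IPs and port are the brokers from my config):

nc -vz 192.100.100.120 9092
nc -vz 192.100.100.122 9092
nc -vz 192.100.100.123 9092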
Any advice is deeply appreciated.