Looking at using the latest filebeat-5.2.2-1 (as of this post) with the output.kafka option. The Kafka "hosts" value (the brokers used for reading cluster metadata) points to a VIP on a hardware load balancer that fronts our Kafka cluster. We do this for various reasons, but in this particular scenario it's due to firewall restrictions. Filebeat gets the metadata (the list of brokers and their roles) fine, but then tries to send events directly to the brokers instead of using the "hosts" value. I understand why it's doing this... what I need to know is whether there is a way to force Filebeat to only send events to the "hosts" value, or to, say, a dedicated broker value. I have played around with the "partition" values but with no success. Am I missing a setting, or would this need a new config feature? Thanks.
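For reference, a rough sketch of the relevant part of the config I'm describing (hostname, port, and topic are placeholders; the partition settings shown are the ones from the current output.kafka docs, so they may differ slightly in 5.2.2):

```yaml
output.kafka:
  # Bootstrap endpoint: the VIP on the hardware load balancer in front
  # of the Kafka cluster (hostname and port are placeholders).
  hosts: ["kafka-vip.example.com:9092"]
  topic: "filebeat-logs"

  # Partitioning strategy: this only controls how events are spread
  # across partitions, not which broker addresses Filebeat connects to.
  partition.round_robin:
    reachable_only: false
```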
I don't think this is possible in our current implementation. But perhaps @steffens has some more ideas?
This is not how Kafka is supposed to be used from Beats. The configured Kafka hosts are only used for bootstrapping: one host is queried for the cluster metadata, which contains information about the brokers and the topic/partition assignments in Kafka. Beats then load-balance events across all partitions.
The metadata returned by the Kafka cluster during bootstrapping contains the hosts that clients are supposed to connect to: the advertised hostname of each broker is used. It's up to the Kafka cluster to report the correct endpoints per topic/partition.
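To illustrate (hostnames below are placeholders), those advertised endpoints come from each broker's own configuration, e.g. in server.properties:

```properties
# server.properties on each broker (hostname is a placeholder)

# Address the broker binds to for client connections
listeners=PLAINTEXT://0.0.0.0:9092

# Address the broker reports in the cluster metadata; this is what
# clients such as Beats actually connect to after bootstrapping
advertised.listeners=PLAINTEXT://broker1.example.com:9092
```

Note that produce requests still have to reach the leader of each partition, so simply advertising a single load-balancer address from every broker is generally not a workable setup.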