Filebeat uses internal DNS name of Kafka broker

Hello,

We are trying to harvest CSV data with Filebeat into Elasticsearch. The data pipeline is set up to stream through Kafka -> Logstash -> Elasticsearch.

We have a hybrid cloud setup of a corporate data center and AWS. Filebeat is installed on the corporate data center hosts, whereas the Kafka, Logstash, and Elasticsearch servers are in an AWS VPC.

The Kafka IP address (AWS) is 10.x.x.x, which is directly reachable from the corporate data center hosts. This IP address is configured in filebeat.yml. Here is the config file:

filebeat.registry_file: /var/lib/filebeat/registry
filebeat.shutdown_timeout: 30s
filebeat.prospectors:
- input_type: log
  paths:
    - /home/svcload3/s4/logs/perf-UI*.csv
  scan_frequency: 5s
  fields:
    log_type: perflogs
    service: load3
    product: s4
  exclude_lines: ['SourcePage']
output.kafka:
  hosts: ["10.x.x.x:9092", "10.x.x.y:9092", "10.x.x.z:9092"]
  topic: perflogs
processors:
- add_cloud_metadata:

Issue:
In the Filebeat logs, we observed that it is trying to connect to the internal DNS address of the Kafka server, which is not reachable from the corporate data center hosts. How do we resolve this issue?

Filebeat log

2018-01-13T05:31:21-08:00 INFO Failed to connect to broker [[ip-10-x-x-x.us-west-1.compute.internal:9092 dial tcp: lookup ip-10-169-48-77.us-west-1.compute.internal on 10.x.z.z:53: no such host]]: %!s(MISSING)

Hi,

Please tell us what version of Filebeat you are using and, if possible, share the debug output (-d '*') from running Filebeat until the given error appears.
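
If it helps, one typical way to capture that output (a sketch, assuming a package install with the config at /etc/filebeat/filebeat.yml; adjust the path for your setup) is to run Filebeat in the foreground with all debug selectors enabled:

filebeat -e -d '*' -c /etc/filebeat/filebeat.yml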

Thanks

The addresses used are the addresses advertised by the Kafka brokers. You have to fix the advertised listener addresses in your Kafka setup.
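
As a rough sketch (assuming Kafka 0.10+ and the placeholder 10.x.x.x addresses used above), each broker's server.properties should advertise the address that is reachable from the corporate data center, for example:

# server.properties on the broker whose VPC address is 10.x.x.x
# listen on all interfaces, but advertise the address that the
# corporate data center hosts can actually reach
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.x.x.x:9092

After changing this on each broker and restarting it, the cluster metadata will hand clients (including Filebeat) the reachable IP instead of the internal EC2 hostname.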

The Kafka connection setup is called bootstrapping: only one of the configured hosts is asked for the Kafka cluster's metadata, and connections to the actual brokers are then made based on that metadata.
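
One way to verify what the brokers advertise (a sketch, assuming kafkacat is available on one of the corporate data center hosts) is to request the cluster metadata directly:

kafkacat -L -b 10.x.x.x:9092

The broker list in the output should show the reachable 10.x.x.x addresses rather than the ip-10-*.us-west-1.compute.internal names.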


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.