Filebeat does not correctly resolve container hostname

I'm setting up a Filebeat container with a Kafka output pointing at another container on the same back-end Docker network. I can confirm the container itself resolves the Kafka container correctly:

[root@rocksensor1 ~]# docker exec -it rock-filebeat ping rock-kafka
PING rock-kafka (172.19.0.4) 56(84) bytes of data.
64 bytes from rock-kafka.rocknsm_inside (172.19.0.4): icmp_seq=1 ttl=64 time=0.196 ms
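
For context, containers on a user-defined Docker network resolve each other through Docker's embedded DNS server at 127.0.0.11, so the container's resolver config should confirm that lookups for rock-kafka never leave Docker. A quick sanity check (the network name rocknsm_inside is taken from the ping output above):

# Confirm the container points at Docker's embedded DNS (127.0.0.11)
docker exec -it rock-filebeat cat /etc/resolv.conf

# Verify both containers are attached to the back-end network
docker network inspect rocknsm_inside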

However, the Kafka output seems to be using Google's DNS (8.8.8.8), and I don't know why. I must have something misconfigured somewhere, but I'm not sure where:

filebeat -e -c /opt/rocknsm/filebeat/
2018/01/16 22:46:12.897173 harvester.go:215: INFO Harvester started for file: /var/log/suricata/eve.json
2018/01/16 22:46:13.299824 log.go:36: INFO kafka message: [Initializing new client]
2018/01/16 22:46:13.299893 log.go:36: INFO client/metadata fetching metadata for all topics from broker [[rock-kafka:9092]]
2018/01/16 22:46:13.400410 log.go:36: INFO Failed to connect to broker [[rock-kafka:9092 dial tcp: lookup rock-kafka on 8.8.8.8:53: no such host]]: %!s(MISSING)
2018/01/16 22:46:13.400485 log.go:36: INFO kafka message: [client/metadata got error from broker while fetching metadata: dial tcp: lookup rock-kafka on 8.8.8.8:53: no such host]
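
The "lookup rock-kafka on 8.8.8.8:53" part of the error makes me think the lookup is going through the host's resolver rather than Docker's embedded DNS, which is what I'd expect if the filebeat process above were running on the host instead of inside the rock-filebeat container. A quick way to compare the two resolution paths (container name as above):

# On the host: resolution follows the host's /etc/resolv.conf (8.8.8.8 here)
getent hosts rock-kafka

# Inside the container: resolution goes through Docker's embedded DNS
docker exec -it rock-filebeat getent hosts rock-kafka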

Config:
#=========================== Filebeat prospectors =============================

filebeat.prospectors:
- type: log
  paths:
    - /var/log/suricata/eve.json
  json.keys_under_root: true
  fields:
    kafka_topic: suricata-raw
  fields_under_root: true
- type: log
  paths:
    - /data/fsf/rockout.log
  json.keys_under_root: true
  fields:
    kafka_topic: fsf-raw
  fields_under_root: true
processors:
- decode_json_fields:
    fields: ["message", "Scan Time", "Filename", "objects", "Source", "meta", "Alert", "Summary"]
    process_array: true
    max_depth: 10

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

output.kafka:
  hosts: ["rock-kafka:9092"]

  topic: '%{[kafka_topic]}'
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
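
The output section itself matches the documented Kafka output options, so the hostname should resolve as long as the process doing the lookup shares the Docker network. If the lookup really is happening on the host, one fix would be to launch Filebeat inside the container instead (a sketch; the in-container config path below is my assumption, not something from this setup):

# Run Filebeat inside the container so rock-kafka resolves via Docker DNS
# (/usr/share/filebeat/filebeat.yml is an assumed mount point)
docker exec -it rock-filebeat filebeat -e -c /usr/share/filebeat/filebeat.yml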

Wait, no, I've done something silly. This isn't the issue, but something is still wrong. One moment.

I fixed it.
