Filebeat does not correctly resolve container IP

(Grantcurell) #1

I'm setting up a Filebeat container with a Kafka output pointing at another container on the same back-end Docker network. I can confirm that the container itself resolves the Kafka container correctly:

[root@rocksensor1 ~]# docker exec -it rock-filebeat ping rock-kafka
PING rock-kafka ( 56(84) bytes of data.
64 bytes from rock-kafka.rocknsm_inside ( icmp_seq=1 ttl=64 time=0.196 ms

However, the Kafka plugin seems to be using Google's DNS and I don't know why. I must have something misconfigured somewhere, but I'm not sure where:

filebeat -e -c /opt/rocknsm/filebeat/
2018/01/16 22:46:12.897173 harvester.go:215: INFO Harvester started for file: /var/log/suricata/eve.json
2018/01/16 22:46:13.299824 log.go:36: INFO kafka message: [Initializing new client]
2018/01/16 22:46:13.299893 log.go:36: INFO client/metadata fetching metadata for all topics from broker [[rock-kafka:9092]]
2018/01/16 22:46:13.400410 log.go:36: INFO Failed to connect to broker [[rock-kafka:9092 dial tcp: lookup rock-kafka on no such host]]: %!s(MISSING)
2018/01/16 22:46:13.400485 log.go:36: INFO kafka message: [client/metadata got error from broker while fetching metadata: dial tcp: lookup rock-kafka on no such host]
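When ping resolves a name but the Kafka client doesn't, it usually means the lookup isn't going through Docker's embedded resolver. A quick diagnostic sketch (container name taken from the ping above; assumes `getent` exists in the image):

```shell
# On user-defined Docker networks, the embedded DNS server at 127.0.0.11
# is what resolves service names such as rock-kafka. If resolv.conf shows
# a public resolver (e.g. 8.8.8.8) instead, container-name lookups fail.
docker exec rock-filebeat cat /etc/resolv.conf

# Resolve the broker name through the container's configured resolver.
docker exec rock-filebeat getent hosts rock-kafka
```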

#=========================== Filebeat prospectors =============================

filebeat.prospectors:
- type: log
  paths:
    - /var/log/suricata/eve.json
  json.keys_under_root: true
  fields:
    kafka_topic: suricata-raw
  fields_under_root: true
- type: log
  paths:
    - /data/fsf/rockout.log
  json.keys_under_root: true
  fields:
    kafka_topic: fsf-raw
  fields_under_root: true

processors:
- decode_json_fields:
    fields: ["message", "Scan Time", "Filename", "objects", "Source", "meta", "Alert", "Summary"]
    process_array: true
    max_depth: 10

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

output.kafka:
  hosts: ["rock-kafka:9092"]

  topic: '%{[kafka_topic]}'
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
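Alongside checking DNS, it can save time to confirm the broker port in the output section is actually reachable from the Filebeat container (a sketch; assumes bash is present in the image):

```shell
# /dev/tcp is a bash built-in path: the redirect succeeds only if a TCP
# connection to rock-kafka:9092 can be opened from inside the container.
docker exec rock-filebeat bash -c 'timeout 2 bash -c "</dev/tcp/rock-kafka/9092" && echo reachable'
```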

(Grantcurell) #2

Wait, no, I've done something silly. This isn't the issue, but something is still wrong. One moment.

(Grantcurell) #3

I fixed it.

(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.