Upgraded from 5.6.5 to 6.2.4, can't stream to kafka

After I upgraded from 5.6.5 to 6.2.4, I can't get my data into Kafka (1.0.0). I went through the breaking changes, but the only thing I could come up with is that maybe the sarama library is broken against Kafka 1.0.0?

The error messages are the following:

2018-05-16T12:58:23.791Z	INFO	kafka/log.go:36	kafka message: [Initializing new client]
2018-05-16T12:58:23.791Z	INFO	kafka/log.go:36	client/metadata fetching metadata for all topics from broker [[kafka0002:9092]]

2018-05-16T12:58:23.794Z	INFO	kafka/log.go:36	Connected to broker at [[kafka0002.9092]] (unregistered)

2018-05-16T12:58:23.795Z	INFO	kafka/log.go:36	client/brokers registered new broker #[[657 %!d(string=kafka0003:9092)]] at %!s(MISSING)
2018-05-16T12:58:23.795Z	INFO	kafka/log.go:36	client/brokers registered new broker #[[656 %!d(string=kafka0002:9092)]] at %!s(MISSING)
2018-05-16T12:58:23.795Z	INFO	kafka/log.go:36	client/brokers registered new broker #[[655 %!d(string=kafka0001:9092)]] at %!s(MISSING)
2018-05-16T12:58:23.795Z	INFO	kafka/log.go:36	kafka message: [Successfully initialized new client]
2018-05-16T12:58:23.796Z	INFO	kafka/log.go:36	producer/broker/[[657]] starting up

2018-05-16T12:58:23.796Z	INFO	kafka/log.go:36	producer/broker/[[657 %!d(string=access_ats_edge) 4]] state change to [open] on %!s(MISSING)/%!d(MISSING)

2018-05-16T12:58:23.796Z	INFO	kafka/log.go:36	producer/broker/[[657 %!d(string=access_ats_edge) 1]] state change to [open] on %!s(MISSING)/%!d(MISSING)

The configuration is fairly simple:

filebeat:
  prospectors:
    - type: log
      paths:
        - "/opt/trafficserver/var/log/trafficserver/custom_ats_2.log"
      document_type: custom_ats_2
      tail_files: false
      fields:
        cachegroup: us-ga-atlanta
        cdn: cdn1
        cachetype: edge
      #fields_under_root: true
      close_inactive: 1h
      ignore_older: 24h

output:
  kafka:
    hosts: ["kafka0002:9092"]
    topic: access_ats_edge
    partition.round_robin:
      reachable_only: false
    required_acks: 1
    compression: gzip
    max_message_bytes: 1000000


logging:
  level: info
  to_files: true
  to_syslog: false
  files:
    path: /var/log/filebeat
    name: filebeat.log
    keepfiles: 7

The weird messages you're seeing seem to be caused by a broken log implementation on our side (there will be a fix for this in 6.3). It's nothing that should cause Kafka to stop working.

Do you see any other errors in Kafka or is it just the messages not going through?

Can you re-run the Beat with debug logging enabled (-e -d '*') and paste the results here?
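Also, while you're looking at the breaking changes: `document_type` was removed in 6.0, so that setting in your prospector config no longer does anything in 6.2. The documented replacement is a custom field under `fields`. A sketch based on your config (`log_type` is just an example field name):

```
filebeat:
  prospectors:
    - type: log
      paths:
        - "/opt/trafficserver/var/log/trafficserver/custom_ats_2.log"
      fields:
        log_type: custom_ats_2   # replaces the removed document_type
```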

Messages were simply not being sent to Kafka. I moved to a brand-new Kafka 1.0.1 cluster and that worked fine, so something is broken on my other cluster. The %!s(MISSING)/%!d(MISSING) printouts are still present, but streaming is working.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.