Issue with sending to Kafka 0.10.2.1 over TLS

I've been trying to get Filebeat to work with Kafka over TLS. After numerous attempts I've gotten Filebeat to connect and fetch metadata, but I just can't get it to send messages to Kafka. My log looks something like this:

2017-12-12T06:56:02Z INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-12-12T06:56:02Z INFO Beat UUID: f69e85fe-d7fa-4a3f-a98b-e7a6216a3600
2017-12-12T06:56:02Z INFO Metrics logging every 30s
2017-12-12T06:56:02Z INFO Setup Beat: filebeat; Version: 6.0.1
2017-12-12T06:56:02Z INFO Beat name: was1
2017-12-12T06:56:02Z INFO filebeat start running.
2017-12-12T06:56:02Z INFO Registry file set to: /var/lib/filebeat/registry
2017-12-12T06:56:02Z INFO Loading registrar data from /var/lib/filebeat/registry
2017-12-12T06:56:02Z INFO States Loaded from registrar: 2
2017-12-12T06:56:02Z WARN Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2017-12-12T06:56:02Z INFO Loading Prospectors: 2
2017-12-12T06:56:02Z INFO Starting Registrar
2017-12-12T06:56:02Z INFO Starting prospector of type: log; id: 13794605881208891974 
2017-12-12T06:56:02Z INFO Harvester started for file: /log/api_analytic.log
2017-12-12T06:56:02Z INFO Starting prospector of type: log; id: 15380630248052876017 
2017-12-12T06:56:02Z INFO Loading and starting Prospectors completed. Enabled prospectors: 2
2017-12-12T06:56:02Z INFO Harvester started for file: /log/api.log
2017-12-12T06:56:03Z WARN kafka message: Initializing new client
2017-12-12T06:56:03Z WARN client/metadata fetching metadata for all topics from broker kafka-ssl2-elb-vip.xxx.com:59092
2017-12-12T06:56:03Z WARN Connected to broker at kafka-ssl2-elb-vip.xxx.com:59092 (unregistered)
2017-12-12T06:56:03Z WARN client/brokers registered new broker #2 at kafka-ssl2-elb-vip.xxx.com:59092
2017-12-12T06:56:03Z WARN client/brokers registered new broker #1 at kafka-ssl1-elb-vip.xxx.com:59092
2017-12-12T06:56:03Z WARN client/brokers registered new broker #3 at kafka-ssl3-elb-vip.xxx.com:59092
2017-12-12T06:56:03Z WARN kafka message: Successfully initialized new client
2017-12-12T06:56:32Z INFO Non-zero metrics in the last 30s: beat.memstats.gc_next=4194304 beat.memstats.memory_alloc=2984112 beat.memstats.memory_total=3471164848 filebeat.events.active=12 filebeat.events.added=16 filebeat.events.done=4 filebeat.harvester.open_files=2 filebeat.harvester.running=2 filebeat.harvester.started=2 libbeat.config.module.running=0 libbeat.output.events.active=10 libbeat.output.events.batches=238606 libbeat.output.events.failed=1539418 libbeat.output.events.total=1539428 libbeat.output.type=kafka libbeat.outputs.kafka.bytes_read=2741 libbeat.outputs.kafka.bytes_write=23 libbeat.pipeline.clients=2 libbeat.pipeline.events.active=12 libbeat.pipeline.events.filtered=4 libbeat.pipeline.events.published=12 libbeat.pipeline.events.retry=1539426 libbeat.pipeline.events.total=16 registrar.states.current=2 registrar.states.update=4 registrar.writes=4
2017-12-12T07:15:12Z INFO Harvester started for file: /log/api_analytic.log
2017-12-12T07:15:12Z INFO Harvester started for file: /log/api.log
2017-12-12T07:15:13Z INFO retryer: send wait signal to consumer
2017-12-12T07:15:13Z INFO   done
2017-12-12T07:15:13Z INFO retryer: send unwait-signal to consumer
2017-12-12T07:15:13Z INFO   done
2017-12-12T07:15:13Z INFO retryer: send wait signal to consumer
2017-12-12T07:15:13Z INFO   done
2017-12-12T07:15:13Z INFO retryer: send unwait-signal to consumer
2017-12-12T07:15:13Z INFO   done
...

As you can see, Filebeat connects to Kafka and fetches metadata successfully, but the retryer just loops endlessly whenever a new line is added to the log, and no messages ever make it to Kafka (note the libbeat.output.events.failed count in the metrics above).

My Filebeat config looks like this:

filebeat:
  # List of prospectors to fetch data.
  prospectors:
    - paths:
      - /log/api_analytic.log
      document_type: "api.analytics"
      multiline.pattern:    '^\s+'
      multiline.negate:     false
      multiline.match:      after
      type:           log

    - paths:
      - /log/api.log
      type: log
      document_type: "api"
      multiline.pattern:    '^\s+'
      multiline.negate:     false
      multiline.match:      after

############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:
  ### Kafka as output
  kafka:
    hosts: ["kafka-ssl1-elb-vip.xxx.com:59092", "kafka-ssl2-elb-vip.xxx.com:59092", "kafka-ssl3-elb-vip.xxx.com:59092"]
    topic: '%{[type]}'

    partition.hash:
      reachable_only: true
      hash: [ "beat.hostname", "type" ]

    worker: 4
    compression: none

    client_id: "beats"
    version: "0.10.2.1"

    ssl:
      enabled: true
      # List of root certificates for HTTPS server verifications
      certificate_authorities: ["/etc/pki/tls/ca.crt"]
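
To see why the produce requests are failing, one thing that can help is enabling debug logging for Filebeat's kafka selector. This is a standard Filebeat logging section, shown here only as a troubleshooting suggestion:

```yaml
# Add to filebeat.yml to surface the underlying Kafka client errors
# (remove again once the problem is diagnosed).
logging:
  level: debug
  selectors: ["kafka"]
```

With this enabled, the log should show the actual error returned for each failed produce attempt rather than just the retryer loop.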

At first I thought Kafka itself was having issues, but when I test with the kafka-console-producer and kafka-console-consumer scripts, I can send and receive messages with no problem. Hoping I can get some insight into this issue from the community.

Some things to note:

  • Kafka is sitting behind an AWS ELB, but with SSL enabled. Connectivity has been verified using the kafka-console scripts
  • No client authentication is enabled; just server-side encryption
  • I've also tried disabling SSL entirely, and I still get the same issue
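
For completeness, the console-script verification mentioned above used an SSL client properties file along these lines (the paths and password are placeholders, not the exact values used):

```properties
# client-ssl.properties -- passed to the console scripts via
# --producer.config / --consumer.config; placeholder paths.
security.protocol=SSL
ssl.truststore.location=/etc/pki/tls/kafka.client.truststore.jks
ssl.truststore.password=changeit
```

Since these scripts work against the same ELB endpoints, the brokers and the TLS listener themselves appear to be healthy.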
