Metricbeat Kafka Module "error in connect: EOF"

Using the Metricbeat Kafka module, I get the following errors:

2019-08-01T20:31:02.479+0200    INFO    kafka/log.go:53 Connected to broker at localhost:9093 (unregistered)
2019-08-01T20:31:02.731+0200    INFO    kafka/log.go:53 Closed connection to broker localhost:9093
2019-08-01T20:31:02.731+0200    INFO    module/wrapper.go:244   Error fetching data for metricset kafka.consumergroup: error in connect: EOF
2019-08-01T20:31:10.539+0200    INFO    kafka/log.go:53 Connected to broker at localhost:9093 (unregistered)
2019-08-01T20:31:10.791+0200    INFO    kafka/log.go:53 Closed connection to broker localhost:9093
2019-08-01T20:31:10.791+0200    INFO    module/wrapper.go:244   Error fetching data for metricset kafka.partition: error in connect: EOF
2019-08-01T20:31:11.971+0200    INFO    kafka/log.go:53 Connected to broker at localhost:9093 (unregistered)

This keeps repeating forever, even though Kafka is up and running. I don't see any metrics being forwarded to Elasticsearch, only documents containing the exact same error messages.

I use Metricbeat version 7.2 with the following configuration:

  - module: kafka
    metricsets: ["consumergroup", "partition"]
    period: 10s
    hosts: ["localhost:9093"]
    enabled: true

Please let me know if you have any idea what might be the reason or how to fix it.


Is your Kafka broker listening on port 9092 or 9093? Maybe try:

- module: kafka
  metricsets: ["consumergroup", "partition"]
  period: 10s
  hosts: ["localhost:9092"]
  enabled: true

Hi @Kaiyan_Sheng, I am having the same problem. I do not think it is because of the Kafka port, because then the error would be:

error in connect: No advertised broker with address kafka:9093 found
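For anyone who does hit that variant of the error: it usually means the broker's `advertised.listeners` does not match the address the client connects with. A sketch of the relevant server.properties settings (hostnames and ports here are just assumptions for illustration):

```properties
# server.properties
# The address the broker binds to
listeners=PLAINTEXT://0.0.0.0:9093
# The address handed back to clients (including Metricbeat); it must be
# resolvable/reachable from wherever Metricbeat runs
advertised.listeners=PLAINTEXT://localhost:9093
```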

This is also telling us that the connection is set up, right?
2019-08-01T20:31:02.479+0200 INFO kafka/log.go:53 Connected to broker at localhost:9093 (unregistered)

Please correct me if I am wrong. Currently I am out of ideas on how to fix this.
Thank you already for your help.

Yes, you are right, thanks @Salohy! I tried to reproduce it on my side with Kafka 2.1.0 and I only see

2019-08-28T09:08:07.285-0600    INFO    kafka/log.go:53 Connected to broker at localhost:9092 (unregistered)

2019-08-28T09:08:07.303-0600    INFO    kafka/log.go:53 Closed connection to broker localhost:9092

I can't reproduce the EOF log though. Are you still seeing the same issue? Looking at the Metricbeat code, it seems this error happens either when connecting to a Kafka broker or when querying metadata from the broker. The error log is not clear about where exactly it happened, so I will definitely create a PR to improve that.

What version of Kafka are you using? Searching around, I see some similar Kafka cases, such as https://github.com/Shopify/sarama/issues/1071; maybe this will help?
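One more thing worth ruling out (an assumption on my part, since port 9093 is conventionally used for a TLS listener): if that port on the broker expects SSL/TLS and Metricbeat connects in plaintext, the failed handshake can surface as exactly "error in connect: EOF". A hedged sketch of the module config with TLS enabled (certificate paths are placeholders):

```yaml
- module: kafka
  metricsets: ["consumergroup", "partition"]
  period: 10s
  hosts: ["localhost:9093"]
  enabled: true
  # Only needed if the broker's 9093 listener is an SSL listener;
  # all paths below are placeholders for your own certificates
  ssl.certificate_authorities: ["/etc/pki/kafka/ca.pem"]
  ssl.certificate: "/etc/pki/kafka/client.pem"
  ssl.key: "/etc/pki/kafka/client-key.pem"
```

If the broker listener is plaintext after all, these `ssl.*` settings must be left out, so it is worth checking the `listeners` line in the broker's server.properties first.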

I have the same problem, but with the system module instead of Kafka. The only thing in common is that I use a non-standard port to communicate with Logstash. Here are my specs:

metricbeat.yml

metricbeat.modules:
- module: system
  metricsets:
    #- cpu             # CPU usage
    #- load            # CPU load averages
    #- memory          # Memory usage
    #- network         # Network IO
    - process          # Per process metrics
    - process_summary  # Process summary
    #- uptime          # System Uptime
    #- socket_summary  # Socket summary
    #- core            # Per CPU core usage
    #- diskio          # Disk IO
    #- filesystem      # File system usage for each mountpoint
    #- fsstat          # File system summary metrics
    #- raid            # Raid
    #- socket          # Sockets and connection info (linux only)
  enabled: true
  period: 10s
  processes: ['.*']

  # Configure the metric types that are included by these metricsets.
  cpu.metrics:  ["percentages"]  # The other available options are normalized_percentages and ticks.
  core.metrics: ["percentages"]  # The other available option is ticks.

  # A list of filesystem types to ignore. The filesystem metricset will not
  # collect data from filesystems matching any of the specified types, and
  # fsstats will not include data from these filesystems in its summary stats.
  # If not set, types associated to virtual filesystems are automatically
  # added when this information is available in the system (e.g. the list of
  # `nodev` types in `/proc/filesystem`).
  #filesystem.ignore_types: []

  # These options allow you to filter out all processes that are not
  # in the top N by CPU or memory, in order to reduce the number of documents created.
  # If both the `by_cpu` and `by_memory` options are used, the union of the two sets
  # is included.
  #process.include_top_n:

    # Set to false to disable this feature and include all processes
    #enabled: true

    # How many processes to include from the top by CPU. The processes are sorted
    # by the `system.process.cpu.total.pct` field.
    #by_cpu: 0

    # How many processes to include from the top by memory. The processes are sorted
    # by the `system.process.memory.rss.bytes` field.
    #by_memory: 0

  # If false, cmdline of a process is not cached.
  #process.cmdline.cache.enabled: true

  # Enable collection of cgroup metrics from processes on Linux.
  process.cgroups.enabled: true

  # A list of regular expressions used to whitelist environment variables
  # reported with the process metricset's events. Defaults to empty.
  #process.env.whitelist: []

  # Include the cumulative CPU tick values with the process metrics. Defaults
  # to false.
  #process.include_cpu_ticks: false

  # Raid mount point to monitor
  #raid.mount_point: '/'

  # Configure reverse DNS lookup on remote IP addresses in the socket metricset.
  #socket.reverse_lookup.enabled: false
  #socket.reverse_lookup.success_ttl: 60s
  #socket.reverse_lookup.failure_ttl: 60s

  # Diskio configurations
  #diskio.include_devices: []

# Note: this is a top-level setting, not a system module option,
# so it must not be indented under `metricbeat.modules`.
setup.dashboards.enabled: true

output.logstash:
  hosts: ['localhost:5044']
  ssl.certificate_authorities:
    - ./cert/MyRootCA.pem
  ssl.certificate: "./cert/metricbeat.pem"
  ssl.key: "./cert/metricbeat-key.pem"

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.