High-volume hosts are dropping data

Hi,

We have certain high-volume hosts that are sending data to EventHub via winlogbeat. These hosts are dropping some data, and I don't see any indication of this in the logs.

I see some outputs.events.failed counts, but I don't see any outputs.events.dropped.

Are there any specific configurations or fine-tuning that need to be considered for high-volume sources?

queue.mem:
  events: 4096
  flush.min_events: 2048
  flush.timeout: 5s

output.kafka:
  enabled: true
  hosts: [xyz]
  topic: "abct"
  required_acks: 1
  username: "$ConnectionString"
  password: "123"
  compression: none
  ssl.enabled: true
  partition.random:
    reachable_only: false
  keep_alive: 180000
  channel_buffer_size: 512

That means the events were retried and eventually sent, according to Understand metrics in Winlogbeat logs | Winlogbeat Reference [8.11] | Elastic:

Note that failed events are not lost or dropped; they will be sent back to the publisher pipeline for retrying later.
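Since failed events are retried rather than permanently dropped, back-pressure from a small memory queue is a more likely bottleneck on high-volume hosts than silent data loss. As a hedged starting point, you could try enlarging the internal queue (the values below are illustrative, not tuned for your environment):

```yaml
queue.mem:
  events: 32768          # more room to absorb bursts (default-style small queues stall readers)
  flush.min_events: 4096 # larger batches handed to the output
  flush.timeout: 1s      # flush sooner when traffic is lighter
```

A bigger queue with larger flush batches lets the Kafka output send fewer, fuller requests instead of blocking the event log reader during spikes.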

What indication do you have of dropped data?

I would recommend turning on additional metrics logging with the following config, which can help with diagnosing:

# Read metrics from:
# curl http://localhost:6061/inputs/
# curl http://localhost:6061/stats
# curl http://localhost:6061/buffer
http:
  host: localhost
  port: 6061
  buffer.enabled: true
  pprof:
    enabled: true

logging:
  metrics:
    namespaces: [stats, dataset]
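Once the HTTP endpoint is enabled, you can poll /stats and compare the output counters directly. A minimal sketch of what to look at — the JSON paths follow libbeat's stats layout, and the sample payload below is illustrative, not real output:

```python
import json

# Illustrative excerpt of what `curl http://localhost:6061/stats` returns
# (the real response contains many more fields).
sample = json.loads("""
{
  "libbeat": {
    "output": {
      "events": {"acked": 120000, "failed": 35, "dropped": 0, "total": 120035}
    }
  }
}
""")

events = sample["libbeat"]["output"]["events"]
# failed events are re-queued and retried; dropped events are permanently lost
print(f"acked={events['acked']} failed={events['failed']} dropped={events['dropped']}")
```

If dropped stays at 0 while failed climbs, the events are being retried successfully and any gap you see downstream is more likely on the EventHub/consumer side.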
