Winlogbeat connection issues

Hi,

We are seeing a large number of connections from Winlogbeat to Event Hubs over the Kafka protocol. We have increased the keep_alive setting to 180,000 as per Microsoft's Kafka recommendation, and also changed the partition setting to random.
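For reference, this is roughly how I understand those two settings map onto output.kafka (a sketch only; assuming the 180,000 in the Microsoft guidance is milliseconds, i.e. 3 minutes, and writing keep_alive as a duration with an explicit unit):

output.kafka:
  # assumption: 180,000 ms from the Microsoft guidance, written with an explicit unit
  keep_alive: 3m
  # distribute events across all partitions at random, not only the reachable ones
  partition.random:
    reachable_only: false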

Is there anything else that we can do to limit the number of connections?

Hi @JJ007

Have you checked the options in the documentation?

Configure the Kafka output | Winlogbeat Reference [8.10] | Elastic

Hi @grfneto ,

Yes, and this is what I have. The number of connections doesn't seem to have gone down.

From what I understand, for each transaction (which is what constitutes a connection), it batches up to 2048 events and sends them to the gateway (Event Hubs over the Kafka protocol).

I see this in the logs:
},
"outputs": {
  "kafka": {
    "bytes_read": 3670,
    "bytes_write": 662642
  }
},
"pipeline": {
  "clients": 32,
  "events": {
    "active": 5,
    "published": 178,
    "total": 178
  },
  "queue": {
    "acked": 174
  }
}
}

How do I interpret these?
Is "outputs"."kafka"."bytes_read": 3670 what was read from the memory queue?
Is "outputs"."kafka"."bytes_write": 662642 what was written to the consumer?

output.kafka:
  enabled: true
  hosts: ["xyz"]
  topic: "xyz"
  required_acks: 1
  username: "$ConnectionString"
  password: <removed>
  compression: none
  ssl.enabled: true
  partition.random:
    reachable_only: false
  keep_alive: 180000

max_message_bytes is set to 1 MB as per the capacity of the broker.

I can increase bulk_flush_frequency, but this will introduce latency. To make a good estimate, I need to know the size of the packets/events being sent, and I am hoping to get that answer from the logs.
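For example, something along these lines is what I have in mind (the 2s is just a placeholder value, not something I have validated):

output.kafka:
  bulk_max_size: 3500
  # placeholder: wait up to 2s for a batch to fill before flushing
  bulk_flush_frequency: 2s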

I have bulk_max_size set to 3500. From the docs, I understand that if there are more than the configured number of events, the Beat agent will split them up and send them in batches.

But from what I see in the logs, there are a total of 235 events and 20 batches. What am I missing here?
Shouldn't the batch count be 0?

libbeat: {
  config: { ... }
  output: {
    events: {
      acked: 250
      active: 0
      batches: 20
      total: 235
    }
  }
  outputs: {
    kafka: {
      bytes_read: 6070
      bytes_write: 943619
    }
  }
  pipeline: {
    clients: 32
    events: {
      active: 9
      published: 244
      total: 244
    }
    queue: {
      acked: 250
    }
  }
}

Even though you have set bulk_max_size to 3500, it doesn't mean that every batch will contain 3500 events. If, for instance, the Beat has only collected 500 events, then it will send a batch of 500 events. From the log snippet you shared, the batches: 20 entry under output -> events suggests that 20 batches of events have been sent. However, this doesn't provide details on the size of each batch. You can use bulk_flush_frequency to increase per-flush efficiency.
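To make that concrete with the numbers from your snippet: 235 events over 20 batches works out to roughly 235 / 20 ≈ 12 events per batch on average, far below your bulk_max_size of 3500, so a non-zero bulk_flush_frequency would give batches more time to fill before each flush.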

Generally, in Kafka, you should see N connections per producer (with N being the number of brokers hosting partitions of the topic) plus an additional one for metadata. This is a simplification, as the actual connection dynamics can be more complex, but it gives a basic idea. I understand you are using Event Hubs and leveraging the Kafka client.
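As a purely hypothetical example: if the topic's partitions were spread across 4 brokers, you would expect roughly 4 + 1 = 5 connections from each producer.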
