Huge packets sent from filebeat to ES

Hi all,

I'm using the netflow module on filebeat v8.8.0 to send netflow traffic to ES. The incoming netflow packets are all about 5KB to 6KB, but when I ran tcpdump on the interface sending the data to ES, I noticed that many of the outgoing packets are around 64KB.

Is this normal? Is there some sort of data buffering before sending to ES that results in the large packets?

I do have the following settings in my filebeat config:

queue.mem.events: 64000
queue.mem.flush.min_events: 4000
output.elasticsearch.bulk_max_size: 4000

and in the netflow.yml:

queue_size: 64000
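
For reference, here's a minimal sketch of how I understand these settings sit in the two files, assuming the standard filebeat.yml and modules.d/netflow.yml layout (the hosts, listen address, and port below are placeholders, not my actual values):

# filebeat.yml
queue.mem.events: 64000                             # capacity of the internal memory queue
queue.mem.flush.min_events: 4000                    # minimum events queued before forwarding to the output
output.elasticsearch.hosts: ["https://es01:9200"]   # placeholder host
output.elasticsearch.bulk_max_size: 4000            # max events per bulk request

# modules.d/netflow.yml
- module: netflow
  log:
    enabled: true
    var:
      netflow_host: 0.0.0.0   # placeholder listen address
      netflow_port: 2055      # placeholder port
      queue_size: 64000       # netflow input's internal packet queue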

I'm currently trying to improve the ingest rate and reduce the packet drops reported by filebeat, so I'm wondering whether there might be a bottleneck in the network.

Thank you.

Yes, filebeat "collects" events so it can bulk-ingest them, and there is some compression as well.
Ingest clients generally use the bulk APIs to improve throughput.
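
If you want to tune that batching, the relevant knobs are on the Elasticsearch output. A rough sketch, assuming 8.x option names (please double-check worker and compression_level against the docs for your version; the host is a placeholder):

output.elasticsearch:
  hosts: ["https://es01:9200"]   # placeholder host
  bulk_max_size: 4000            # events per bulk request (what you already set)
  worker: 2                      # assumption: parallel bulk publishers per host
  compression_level: 1           # gzip level 0-9; values > 0 compress the bulk request body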

