LogStash & Netflow: Wrong values?


I deployed a LogStash server dedicated to collecting Netflow flows (16 CPUs).

The Netflow codec is working: I can see all my data in the predefined dashboards in Kibana, and I can also graph it in Grafana.

Everything looks good, except that when I aggregate the Netflow data to see the bandwidth utilization, there is a gap between what Elasticsearch returns and reality.

For example, if the bandwidth graph I get from SNMP (which represents the real value) shows that download/upload is 1.5 Gbps/400 Mbps during a specific period, the graph based on Netflow data might show 900 Mbps/250 Mbps.

The trend, however, is exactly the same, as are the bursts I can see on both graphs...

It's weird; I triple-checked my calculation formula (Netflow returns a byte count per minute, so I calculate the bandwidth as _value / 60 * 8 to get bits/sec).
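For reference, that conversion can be sketched as follows (`_value` here stands for the per-minute byte sum coming out of the aggregation; the helper name is just for illustration):

```python
def bytes_per_minute_to_bps(byte_count: float) -> float:
    """Convert bytes accumulated over one minute to average bits per second."""
    # 60 seconds per minute, 8 bits per byte
    return byte_count / 60 * 8

# 11,250,000,000 bytes in one minute corresponds to 1.5 Gbps
print(bytes_per_minute_to_bps(11_250_000_000))  # 1500000000.0
```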

Has anyone encountered the same issue?

I have an average of 12k flows per second, but the server looks like it can handle that with 16 CPUs.
Is there any way to see whether it drops some flows?
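(One way to check on Linux, sketched below under the assumption that Logstash's UDP input is the main UDP consumer on the box, is to watch the kernel's UDP receive-error counters; port 2055 is only an example, use whatever port your udp input listens on.)

```shell
# Kernel-wide UDP counters: InErrors / RcvbufErrors growing over time
# usually means the listener cannot drain its socket buffer fast enough.
grep '^Udp:' /proc/net/snmp

# Same information via netstat, if net-tools is installed:
# look for "packet receive errors" and "receive buffer errors".
netstat -su 2>/dev/null | sed -n '/^Udp:/,/^[A-Z]/p'

# Per-socket view: a persistently non-zero Recv-Q on the Netflow port
# (2055 is an assumed example) means packets are queuing up.
ss -ulnpe 'sport = :2055'
```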

Any advice would be appreciated.


Netflow v9?
Do you have the last_switched field on your Netflow events? If so, could you try overwriting the @timestamp value with the last_switched field value?
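A sketch of how that could look in the Logstash pipeline, assuming the netflow codec exposes the field as [netflow][last_switched] in ISO8601 form (check your actual events in Kibana to confirm the field path and format):

```
filter {
  date {
    # Assumption: the codec stores last_switched as an ISO8601 string
    # under the [netflow] namespace.
    match  => [ "[netflow][last_switched]", "ISO8601" ]
    target => "@timestamp"
  }
}
```

Re-timestamping on last_switched attributes each flow's bytes to when the flow actually ended rather than when the collector received the record, which can smooth out per-minute aggregation gaps.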


Thank you for your reply.

Yes, this is Netflow V9.

I will look into the field you mentioned; however, I made another post because I discovered that my Netflow LogStash node might be dropping some UDP packets :slight_smile:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.