I deployed a Logstash server dedicated to collecting Netflow flows (16 CPUs).
The Netflow codec is working: I can see all my data in the predefined dashboards in Kibana, and I can also graph it in Grafana.
Everything looks good, except that when I aggregate the Netflow data to see bandwidth utilization, there is a gap between what Elasticsearch returns and reality.
For example, if the bandwidth graph built from SNMP (which represents the real value) shows that download/upload is 1.5 Gbps/400 Mbps during a specific time window, the graph based on Netflow data will show something like 900 Mbps/250 Mbps.
The trend, however, is exactly the same, as are the bursts I can see on both graphs...
It's weird; I triple-checked my calculation formula (Netflow returns a byte count per minute, so I compute the bandwidth as _value / 60 * 8 to get bits/sec).
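For reference, the conversion above as a tiny helper (the function name is mine; it assumes the field really is a one-minute byte count):

```python
def bytes_per_min_to_bps(byte_count: float) -> float:
    """Convert a one-minute byte count to an average rate in bits/sec."""
    return byte_count / 60 * 8

# e.g. 11.25 GB observed in one minute corresponds to an average of 1.5 Gbps
print(bytes_per_min_to_bps(11.25e9))  # 1500000000.0
```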
Has anyone encountered the same issue?
I have an average of 12k flows per second, but the server looks like it can handle that with 16 CPUs.
Is there any way to see if it drops some flows?
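One place to check on Linux is the kernel's UDP counters in /proc/net/snmp; if the InErrors or RcvbufErrors counters keep climbing, the kernel is dropping datagrams before Logstash ever sees them. A minimal parsing sketch (field names come from the /proc/net/snmp format; run it on the Logstash host):

```python
from pathlib import Path

def udp_counters(snmp_text: str) -> dict:
    """Parse the two 'Udp:' lines of /proc/net/snmp into a name -> value dict."""
    rows = [line.split()[1:] for line in snmp_text.splitlines()
            if line.startswith("Udp:")]
    header, values = rows[0], rows[1]
    return dict(zip(header, map(int, values)))

# On the collector host:
#   stats = udp_counters(Path("/proc/net/snmp").read_text())
#   stats["InErrors"], stats["RcvbufErrors"]  # sample twice; growth means drops
```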
Netflow v9?
Do you have the last_switched field on your Netflow metrics? If yes, could you try setting the @timestamp value from the last_switched field value?
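In case it helps, a sketch of what that remapping could look like in the Logstash filter section, using the standard date filter (the field path is an assumption; check where your codec actually puts last_switched and the timestamp format it emits):

```
filter {
  date {
    # hypothetical field path and format; verify against your events first
    match  => [ "[netflow][last_switched]", "ISO8601" ]
    target => "@timestamp"
  }
}
```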