Bandwidth with Netflow Data

Hello,

I'm trying to use Timelion to get the bandwidth from my Netflow data with this:

.es(index="netflow*", metric="sum:netflow.in_bytes", kibana=true).divide(1048576).mvavg(1m).scale_interval(1s).if(operator="lt", if=0, then=0).trim(start=2,end=1).lines(width=2)

But it doesn't seem accurate, because if I change the time range the max bandwidth changes.

For example: with a time range of the last 24 h the max bandwidth is 18 Mbps, but with a time range of the last 48 h the max bandwidth is 12 Mbps, and so on.

What did I do wrong?

Thanks !

Hey @Sylvain-69, your Timelion query uses the sum of netflow.in_bytes to calculate the moving average, so it makes sense for the values to change over time. Would you mind elaborating on why you expect to see the same values for the 24h and 48h time ranges?

Hi Brandon,

I'll explain with some pictures; it will be easier.
Here is what I get with the time range set to the last 4 hours:

[screenshot: last 4 hours]

We can see that there is a peak at, let's say, 27 Mbps.

If I select the time range of the last 12 hours:

[screenshot: last 12 hours]

The same peak appears to be at 9 Mbps.

And so on with a time range of the last 48 hours:

[screenshot: last 48 hours]

Hey,

Could you tell us which version of Kibana you are using?

The query looks correct, and I think the behavior you expect is right. I tried a similar query, and it works fine for me, so I would like to rule out that this is a bug in a previous version that has already been fixed.

Cheers,
Tim

Hello Timroes,

I'm using the latest version, 6.1.1.

Hi,

I checked and was able to reproduce it. Unfortunately, that's expected behavior. Let me briefly explain why. To simplify matters, let's assume you haven't used any divide or moving average, since they don't influence the behavior anyway.

Assume you have just one document, with netflow.in_bytes set to 600.

If you set the interval size (above the play button) to "auto", Timelion automatically determines a good bucket size (the date range per data point). If you look at the last 48h of data, the bucket size it finds reasonable might be 5 minutes. That means your sum aggregation will return that 600 bytes were transferred in the 5-minute bucket the document falls into (because we just had that one document).

If you now switch over to the last 12h, it might use 1 minute as the bucket size, meaning you will still get the very same absolute value, i.e. 600, for the 1-minute bucket the document falls into.

So if you simply skip the scale_interval, you will see the same absolute values, since the same absolute number of bytes was transferred (by that one document). If you now apply scale_interval(1m), Timelion needs to calculate:

If you are looking at the 48h chart, you have 600 bytes transferred in a 5-minute bucket, so it will calculate 600 bytes / 5 minutes = 120 bytes/minute for this bucket.

If you are looking at the 12h chart, it will calculate 600 bytes / 1 minute = 600 bytes/minute for this bucket.
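To make the arithmetic concrete, here is a minimal Python sketch (purely illustrative, not Timelion's actual code) of what the sum aggregation plus scale_interval does with that one document:

# Illustrative sketch only, not Timelion's implementation.
# One document with 600 bytes lands in a single date-histogram bucket;
# scale_interval then rescales the bucket total to a per-minute rate.

def scale_interval(bucket_total_bytes, bucket_size_minutes, target_minutes=1):
    """Rescale a bucket total to bytes per target interval."""
    return bucket_total_bytes / bucket_size_minutes * target_minutes

total = 600  # bytes in the single document

# 48h view: "auto" picks a 5-minute bucket size
print(scale_interval(total, 5))  # -> 120.0 bytes/minute

# 12h view: "auto" picks a 1-minute bucket size
print(scale_interval(total, 1))  # -> 600.0 bytes/minute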

The same of course also occurs with more than one document, but the effect is more visible the fewer documents with high peak values exist in your data. It is still the correct calculation, because the bandwidth in that time really was 120 or 600 bytes/minute, respectively.

If you don't want this to happen, you can specify a fixed interval size in the select above the play button. That way the bucket size is always the same, and the calculation returns what you expect. Be aware that if you choose the interval size too small, you might not be able to select a large overall time range, since it would produce too many buckets.
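For example (1m is just an illustrative choice), pinning the interval select to 1m instead of "auto" means your original expression always divides by the same bucket size, so the peaks stay comparable across time ranges:

interval: 1m  (set in the select above the play button, not in the expression)
.es(index="netflow*", metric="sum:netflow.in_bytes", kibana=true).divide(1048576).mvavg(1m).scale_interval(1s).if(operator="lt", if=0, then=0).trim(start=2,end=1).lines(width=2)

With a fixed 1m interval, the 24h and 48h views both compute their rates over the same 1-minute buckets.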

I hope that explanation helps a bit in solving this mystery.

Cheers,
Tim

Yes, it helped, thank you!
