This appears to work, but when I look at a short timeframe, let's say 15 minutes, the graph is very spiky and shows far too high bandwidth utilization; when I change the timeframe to, say, 4 hours, it looks okay.
When I change the interval to 1m it looks okay as well, but I can't use that because it creates too many buckets for anything over a couple of days, so I want the interval to auto-scale.
This is a screenshot from a roughly 500 MB file download, downloading steadily at around 1 Mbit/s. At a four-hour scale the graph looks good, but at a lower scale it is incorrect.
.fit doesn't really change anything, and there is no documentation on what exactly the scale or carry modes are supposed to do either.
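To illustrate, the kind of expression being discussed looks roughly like this (the index pattern and field name are placeholders, not necessarily what the poster uses):

```
.es(index=netflow-*, metric=sum:netflow.out_bytes)
  .scale_interval(1s)
  .fit(carry)
```

Here `.scale_interval(1s)` is supposed to normalize each bucket's sum to a per-second rate, and `.fit()` controls how empty buckets are filled (its documented modes include carry, scale, nearest, average, and none).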
The problem is that, after years of people asking, Kibana/Timelion still doesn't appear to support per-second rates properly. scale_interval should do that, but it doesn't account for netflow records not arriving once per second.
For example, if I start a large file download, I can see that roughly every minute a record is created with out_bytes around 50 MB, alongside smaller records from e.g. web browsing. Those large 50 MB chunks are fed into Elasticsearch once a minute, and the logic Timelion applies to them is incorrect.
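The arithmetic behind the spikes can be sketched as follows (a minimal simulation, assuming one 50 MB netflow record per minute as described above; the numbers are illustrative, not measured):

```python
# One netflow record of 50 MB arrives every 60 s during a steady download.
# scale_interval effectively sums bytes per bucket and divides by the bucket
# length, so the apparent rate depends entirely on the bucket size.

RECORD_BYTES = 50 * 1024 * 1024   # 50 MB per record
RECORD_PERIOD = 60                # seconds between records
DURATION = 4 * 3600               # simulate 4 hours of steady download

# Records arrive at t = 0, 60, 120, ...
timestamps = range(0, DURATION, RECORD_PERIOD)

def apparent_rates(bucket_seconds):
    """Sum bytes per bucket, then scale to bytes/second."""
    buckets = [0] * (DURATION // bucket_seconds)
    for t in timestamps:
        buckets[t // bucket_seconds] += RECORD_BYTES
    return [b / bucket_seconds for b in buckets]

true_rate = RECORD_BYTES / RECORD_PERIOD  # the real average bandwidth

# 30 s buckets (what a 15-minute view might use): each record lands whole in
# a single bucket, so the graph alternates between 2x the true rate and zero.
small = apparent_rates(30)
print(max(small) / true_rate)   # -> 2.0 (spike at double the real bandwidth)
print(min(small))               # -> 0.0 (empty buckets in between)

# 5-minute buckets (roughly a 4-hour view): each bucket holds 5 records,
# so the per-second rate averages out to the correct value.
large = apparent_rates(300)
print(max(large) / true_rate)   # -> 1.0
```

This is consistent with what the graphs show: once the bucket interval drops below the ~1-minute record interval, whole buckets are empty and the buckets that do contain a record overshoot the real rate.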