I am using collectd and Logstash to push network interface statistics to Elasticsearch and want to visualize the actual network bandwidth. The problem is that the data sent to Elasticsearch is cumulative, like the total received/sent packets (rx and tx) on the network interface. How can I de-cumulate the data? The easiest way would be to take the difference of the value between two events and divide by the time difference between the two log entries.
For example: log entry 1 happens at time t1 with tx1 and rx1, and log entry 2 happens at time t2 with tx2 and rx2. The uplink speed would then be (tx2-tx1)/(t2-t1). Is there a way to visualize this uplink speed over time?
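To make the arithmetic concrete with some made-up numbers: if tx1 = 1,000,000 bytes at t1 and tx2 = 1,600,000 bytes at t2 = t1 + 60 s, the uplink rate over that minute is (1,600,000 - 1,000,000) / 60 = 10,000 bytes/s, i.e. about 80 kbit/s.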
Thank you all very much!
As far as I know, that's not possible.
You'll have to calculate the differences when loading the data... Unfortunately, that's not where your problems end, because I suspect you'll then end up with the number of bytes sent per time period and not a bitrate (which is how bandwidth is usually expressed).
Kibana makes things even more difficult because it can automatically scale the interval of the graphs to accommodate small to very large datasets.
If you're happy plotting the "sum of bytes", then you're golden as long as you can precalculate the differences. If you know there will always be an entry for every interface for every time period, then you can probably get away with calculating bitrates and plotting the average of those.
I ended up having to code some custom stuff into the Kibana backend to plot my bandwidth graphs.
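If you do want to precalculate the rates at load time, one way is a ruby filter in Logstash that remembers the previous sample per host/interface and emits rate fields. This is only a rough sketch, assuming Logstash 2.x event syntax and that the collectd codec gives you host, plugin_instance, rx and tx fields; adjust the names to whatever your events actually carry, and note that the cached state lives inside one filter worker, so run with a single filter worker (-w 1) or treat it as best effort.

```
filter {
  # only touch events that actually carry the counters
  if [rx] and [tx] {
    ruby {
      # one cache entry per host/interface: [timestamp, rx, tx]
      init => "@prev = {}"
      code => "
        key = [event['host'], event['plugin_instance']].join('/')
        t   = event['@timestamp'].to_i
        rx  = event['rx'].to_f
        tx  = event['tx'].to_f
        if @prev[key] && t > @prev[key][0]
          dt = (t - @prev[key][0]).to_f
          event['rx_rate'] = (rx - @prev[key][1]) / dt
          event['tx_rate'] = (tx - @prev[key][2]) / dt
        end
        @prev[key] = [t, rx, tx]
      "
    }
  }
}
```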
@tbragin, can you or the team share any rough estimates or milestones about when counter metrics will be supported? Is the plan to be able to include timelion graphs as visualizations in kibana dashboards?
I'm trying to get into Timelion specifically to test the derivative aggregation, only I seem to have an issue validating my Timelion config in the tutorial. Which forum would be the right place to discuss Timelion? Here in Kibana, I assume.
Hi Keven! I have the same problem: I'm sending bandwidth SNMP information to Elasticsearch, but it is cumulative data, from the ifHCInOctets and ifHCOutOctets OIDs. Did you solve it?
Diego
CleverTap might also help you to understand how to use the ES derivative aggregation. You need a parent metric/aggregation (not displayed) onto which you can build the derivative metric.
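For example, a Sense request along these lines (the index pattern, field names, and interface filter are placeholders for whatever your collectd pipeline actually produces) puts a max metric inside a date_histogram and takes the derivative of that:

```
GET logstash-*/_search
{
  "size": 0,
  "query": { "term": { "plugin_instance": "eth0" } },
  "aggs": {
    "over_time": {
      "date_histogram": { "field": "@timestamp", "interval": "5m" },
      "aggs": {
        "tx_max":   { "max": { "field": "tx" } },
        "tx_delta": { "derivative": { "buckets_path": "tx_max" } }
      }
    }
  }
}
```

Here tx_max is the hidden parent metric and tx_delta gives you the per-bucket increase of the counter.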
Not sure if I needed to enable inline scripts in my ES cluster with this in elasticsearch.yml:
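On ES 2.x that would typically be something along these lines (followed by a node restart), though double-check against the docs for your version:

```
# elasticsearch.yml - allow inline (dynamic) scripts, e.g. for bucket_script pipeline aggregations
script.inline: true
script.indexed: true
```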
Thanks, I need to read and study more about that, but I tested this code with Sense and got what I need. Now I have to work out how to graph it with Kibana and its JSON input.
Yes, if you prefer Kibana for this. I thought that you, like me, graphed it with Grafana, as your screen dump seems to show and as CleverTap can guide you to. I can't help with Kibana on this.
Here is a sample of how one of my collectd counter metrics, sampled every 300 seconds, is graphed in Grafana 2.6+ through an ES derivative aggregation. Again, I believe you need to enable inline scripting in your ES cluster for this to work.
Good. Remember you'll have to divide your metric by your sampling interval to get per-second or per-minute values, whatever you want (_value/300 for our 5-minute interval gives us per-second values), and then you may want to set a proper Y-axis unit under the Axes tab to something other than 'short' to display it nicely.
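If you'd rather do that division on the ES side instead of in the panel, a bucket_script pipeline aggregation (one of the things that does need inline scripting enabled) can sit next to the derivative inside the date_histogram from the earlier example; the names below are again just placeholders:

```
"aggs": {
  "tx_max":     { "max": { "field": "tx" } },
  "tx_delta":   { "derivative": { "buckets_path": "tx_max" } },
  "tx_per_sec": {
    "bucket_script": {
      "buckets_path": { "d": "tx_delta" },
      "script": "d / 300"
    }
  }
}
```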
I've also seen some issues with graphs not showing initially, until I do something that makes the panel refresh: alter a template variable, change the time zoom a bit... I believe it's another issue in Grafana, maybe connected with the use of the ES pipeline aggregation, I don't know. But at least you have a chance to see derivatives instead of counters.
Hint: if you want to trim off counter resets, which will give large 'negative' spikes, limit the Y-axis minimum value to zero.