Graphing Network utilisation from SNMP ifHCInOctets

Hi, I am new to ELK, and trying to get Kibana to generate network utilisation graphs from SNMP ifHCInOctets.

I have collectd querying SNMP ifHCInOctets and ifHCOutOctets and sending the rx and tx values to Logstash every two minutes.
These (tx, rx) values match the counters on the switch.

I have generated line graphs using the derivative of the counter values. The shape of the line graph matches the rate graph produced by our (much) older MRTG system, however the scale/units look very different.

Example legacy MRTG graph (screenshot):

and an example using ELK (screenshot):

However, the Y-axis units look very different. Additionally, the ELK-generated values look very different from the utilisation reported by the switch's CLI, whilst the legacy MRTG values are very similar to those from the switch CLI.

Can anyone advise me on this, or suggest alternative ways to achieve it, i.e. graphing network utilisation from SNMP ifHCInOctets counters?

Thanks in advance,

@timroes can we please get some help here?

Thanks,
Bhavya

To make sure values are formatted as bytes, you can go to the index pattern under Management for the index you are visualizing, edit the field you're visualizing, and select "Bytes" as the formatter in that screen. The numbers will then be presented as bytes too, which should match up better with the (I think) Icinga graph you posted.

Besides that, the graphs really look the same to me, except that the Kibana chart is of course a bit taller in your screenshot and thus looks a bit more stretched.

Cheers,
Tim

Hi Tim, Thanks for the help, it's really useful

I'm sorry that I didn't explain the issue very well initially.

The issue is that the actual calculated rates seem to be incorrect (though yes, the shape is the same).

MRTG tells me that the max rate in the above graphs was less than 800 Mbit/s, which seems reasonable and accurate to me for a 5-minute average. When I do the calculation with Kibana, I see a peak just under 40,000,000,000, which is far larger than the possible rate can be, especially if the units are bytes/s.

I attach the request, which I hope shows how the values are being calculated:

{
  "aggs": {
    "2": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "10m",
        "time_zone": "Europe/London",
        "min_doc_count": 0
      },
      "aggs": {
        "3": {
          "derivative": {
            "buckets_path": "3-metric"
          }
        },
        "4": {
          "derivative": {
            "buckets_path": "4-metric"
          }
        },
        "3-metric": {
          "max": {
            "field": "rx"
          }
        },
        "4-metric": {
          "max": {
            "field": "tx"
          }
        }
      }
    }
  },
  "size": 0,
  "_source": {
    "excludes": []
  },
  "stored_fields": [
    "*"
  ],
  "script_fields": {},
  "docvalue_fields": [
    {
      "field": "@timestamp",
      "format": "date_time"
    },
    {
      "field": "received_at",
      "format": "date_time"
    }
  ],
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gte": 1548021602416,
              "lte": 1548151202416,
              "format": "epoch_millis"
            }
          }
        },
        {
          "match_phrase": {
            "type": {
              "query": "collectd"
            }
          }
        },
        {
          "match_phrase": {
            "type_instance": {
              "query": "trafficport-channel1"
            }
          }
        },
        {
          "match_phrase": {
            "host": {
              "query": "tango1"
            }
          }
        }
      ],
      "filter": [
        {
          "match_all": {}
        },
        {
          "match_all": {}
        }
      ],
      "should": [],
      "must_not": []
    }
  }
}

I think the main issue here is that Kibana won't scale those values down to a per-second basis. You are using a date histogram interval of 10 minutes, so each derivative is the change over a whole 10-minute bucket, which is of course significantly larger than a per-second rate. (Your peak of roughly 40,000,000,000 bytes per 10-minute bucket works out to about 66.7 MB/s, i.e. around 533 Mbit/s, which is in the same ballpark as the MRTG graph.)

You can work around that by specifying an interval size of 1s for the date histogram. Kibana will still not create that many buckets, but it will scale the values to be represented per second. Alternatively, you could use Timelion (https://www.timroes.de/2017/08/02/timelion-tutorial-from-zero-to-hero/) and use the scale_interval(1s) function in your expression to convert all values to a per-second base.
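Another option, assuming a reasonably recent Elasticsearch version, is to let Elasticsearch do the normalisation itself: the derivative pipeline aggregation accepts a `unit` parameter, and the response then includes a `normalized_value` (in the requested unit) alongside the raw `value`. As a sketch, the rx branch of the request above would become:

```json
"aggs": {
  "3": {
    "derivative": {
      "buckets_path": "3-metric",
      "unit": "1s"
    }
  },
  "3-metric": {
    "max": {
      "field": "rx"
    }
  }
}
```

The `normalized_value` would then be bytes per second regardless of the bucket size; if you want bits per second you'd still need to multiply by 8, e.g. with a bucket_script aggregation.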

Cheers,
Tim

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.