Hi all,

I use Elasticsearch 2.4, Logstash 5.6.3, and Grafana to monitor network devices via NetFlow. But when I created a dashboard in Grafana to watch an output port on one device, I found that sometimes the computed flow rate is larger than the port's maximum physical capacity. The query DSL looks like this:
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [{
        "range": {
          "@timestamp": {
            "gte": "1510749549284",
            "lte": "1510751349284",
            "format": "epoch_millis"
          }
        }
      }, {
        "query_string": {
          "analyze_wildcard": true,
          "query": "host:(\"10.30.32.39\") AND netflow.input_snmp:(\"0\")"
        }
      }]
    }
  },
  "aggs": {
    "3": {
      "terms": {
        "field": "netflow.input_snmp",
        "size": 10,
        "order": {
          "1": "desc"
        },
        "min_doc_count": 1
      },
      "aggs": {
        "1": {
          "sum": {
            "field": "netflow.bytes"
          }
        },
        "2": {
          "date_histogram": {
            "interval": "10s",
            "field": "@timestamp",
            "min_doc_count": 0,
            "extended_bounds": {
              "min": "1510749549284",
              "max": "1510751349284"
            },
            "format": "epoch_millis"
          },
          "aggs": {
            "1": {
              "sum": {
                "field": "netflow.bytes"
              }
            }
          }
        }
      }
    }
  }
}
I think this statistical method is wrong: a flow record's bytes accumulate over the whole flow duration, but the record lands in a single 10s bucket, so summing bytes per bucket can exceed the line rate. So I modified the logstash-codec-netflow source code:
- set the netflow.last_switched and netflow.first_switched field types to float;
- add a new field netflow.duration = netflow.last_switched - netflow.first_switched;
  event[@target]['duration'] = event[@target]['last_switched'] - event[@target]['first_switched']
- set @timestamp = netflow.last_switched;
  event[LogStash::Event::TIMESTAMP] = LogStash::Timestamp.at(seconds, micros).to_iso8601
- add a new field, report;
  "report" => LogStash::Timestamp.at(flowset.unix_sec.snapshot, flowset.unix_nsec.snapshot / 1000),
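The duration step above can be sketched outside Logstash with a plain hash standing in for the event (the field names come from the netflow codec; treating first_switched/last_switched as millisecond SysUptime values is an assumption about the flow version in use):

```ruby
# Stand-in for the Logstash event: a plain hash keyed by the codec's target.
# In NetFlow v5/v9, first_switched/last_switched are SysUptime values in
# milliseconds, so their difference is the flow duration in milliseconds.
event = {
  'netflow' => {
    'first_switched' => 100_000.0,
    'last_switched'  => 103_500.0,
    'bytes'          => 1_400_000
  }
}

target = 'netflow'
event[target]['duration'] =
  event[target]['last_switched'] - event[target]['first_switched']

# Per-flow rate: bytes divided by duration (bytes per millisecond here).
rate = event[target]['bytes'] / event[target]['duration']
puts event[target]['duration']  # => 3500.0
puts rate                       # => 400.0
```

This is only the arithmetic; in the real codec the assignment runs inside the decode path where @target and the event object already exist.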
Now the statistical method is: flow rate = total netflow.bytes / total netflow.duration.

How can I compute sum of netflow.bytes / sum of netflow.duration? Should I use scripting?
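Answering my own question as far as I understand it: a bucket_script pipeline aggregation (available since Elasticsearch 2.0) can divide one sibling sum by another per date_histogram bucket, without a stored script. A sketch, assuming the netflow.duration field added above is indexed:

```json
{
  "size": 0,
  "aggs": {
    "per_10s": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "10s"
      },
      "aggs": {
        "total_bytes":    { "sum": { "field": "netflow.bytes" } },
        "total_duration": { "sum": { "field": "netflow.duration" } },
        "rate": {
          "bucket_script": {
            "buckets_path": {
              "b": "total_bytes",
              "d": "total_duration"
            },
            "script": "b / d"
          }
        }
      }
    }
  }
}
```

Buckets where total_duration is 0 would divide by zero, so a guard like "d > 0 ? b / d : 0" may be needed; whether Grafana's Elasticsearch datasource can build this query itself is a separate question.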