Hi all,
I have multiple indices receiving information from Cisco ASAs and Cisco routers. I created different dashboards with information that is useful to my company, but I have the same issue in all of these dashboards: the bandwidth/time graph.
The Cisco ASAs send the information in the new flow format (fwd_netflow_bytes & rev_netflow_bytes).
For these cases I created the following visualization in Kibana:
Here is the Logstash configuration for one ASA:
input {
  udp {
    port => 9933
    codec => netflow {
      versions => [9]
    }
  }
}
filter {
  if [host] == "xxxxxxxx" {
    grok {
      match => { "host" => "xxxxx" }
    }
    geoip {
      add_tag => [ "geoip" ]
      database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat" # change me to the location of the GeoLiteCity.dat file
      source => "ipv4_dst_addr"
    }
    # drop the geoip fields that came back empty
    if [geoip][city_name]      == "" { mutate { remove_field => "[geoip][city_name]" } }
    if [geoip][continent_code] == "" { mutate { remove_field => "[geoip][continent_code]" } }
    if [geoip][country_code2]  == "" { mutate { remove_field => "[geoip][country_code2]" } }
    if [geoip][country_code3]  == "" { mutate { remove_field => "[geoip][country_code3]" } }
    if [geoip][country_name]   == "" { mutate { remove_field => "[geoip][country_name]" } }
    if [geoip][latitude]       == "" { mutate { remove_field => "[geoip][latitude]" } }
    if [geoip][longitude]      == "" { mutate { remove_field => "[geoip][longitude]" } }
    if [geoip][postal_code]    == "" { mutate { remove_field => "[geoip][postal_code]" } }
    if [geoip][region_name]    == "" { mutate { remove_field => "[geoip][region_name]" } }
    if [geoip][time_zone]      == "" { mutate { remove_field => "[geoip][time_zone]" } }
  }
}
output {
  if [host] == "xxxxxx" {
    stdout { codec => rubydebug }
    elasticsearch {
      manage_template => false
      index => "logstash-xxxxx_ba%{+YYYY.MM.dd}"
      hosts => "xxxxxxxx:9200"
    }
  }
}
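In case it is useful, here is a minimal sketch of how a total_bytes field could be pre-computed in Logstash from the forward/reverse counters, so the Kibana visualization only has to sum a single field. The field names under [netflow] are an assumption and have to match whatever the codec actually emits:
filter {
  # sketch: pre-compute total_bytes from the ASA's forward/reverse byte
  # counters; adjust the field names to the ones the netflow codec decodes
  if [netflow][fwd_netflow_bytes] and [netflow][rev_netflow_bytes] {
    ruby {
      code => "event['total_bytes'] = event['[netflow][fwd_netflow_bytes]'].to_i + event['[netflow][rev_netflow_bytes]'].to_i"
    }
  }
}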
For the Cisco routers I configured NetFlow version 9 format (the template includes out_bytes, but Logstash does not receive anything for "out", only in_bytes).
Anyway, I created this visualization in Kibana:
As an example, here is the Logstash configuration for one of the routers:
input {
  udp {
    port => 9912
    codec => netflow {
      # definitions => "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-netflow-2.1.1/lib/logstash/cod$
      versions => [5]
    }
  }
}
output {
  if [host] == "xxxxxx" {
    stdout { codec => rubydebug }
    elasticsearch {
      manage_template => false
      index => "logstash-xxxxxx%{+YYYY.MM.dd}"
      hosts => "xxxxxxxl"
      sniffing => true
    }
  }
}
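Since the routers are configured for NetFlow version 9 but the codec above only lists version 5, here is a minimal sketch of the same input with both versions enabled (just a sketch, assuming the routers really export v9 to this port):
input {
  udp {
    port => 9912
    codec => netflow {
      # decode both v5 and v9 records; v9 fields such as in_bytes/out_bytes
      # can only be resolved after the router has sent its templates
      versions => [5, 9]
    }
  }
}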
ASA questions:
- The graph does not match the ASDM dashboard on the Cisco ASA. Initially I believed the mismatch was due to the difference between processing time (Logstash, Elasticsearch, and so on) and the ASA's own time.
- If I change the time range of the visualization in the Kibana dashboard, the graph shows a high peak of Sum_bytes that does not match reality (I have Cacti for NetFlow and MRTG for the rates on all the devices).
What am I doing wrong? Am I at least on the right path?
Router questions:
- Why do NetFlow v9 and also v5 not send anything for out_bytes?
I created new templates in Elasticsearch and also checked logstash-codec-netflow, but it seems the routers simply do not send anything for out_bytes: the out_bytes and out_packets fields exist in the indices but carry no data. (I have egress and ingress flow configured on the interface I need to look at.)
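As a sanity check, here is a minimal filter sketch that tags any record actually carrying out_bytes, so its presence or absence is easy to see in Kibana (the [netflow][out_bytes] field name is an assumption based on the codec's naming):
filter {
  # sketch: tag flow records that contain an out_bytes value so it is
  # visible in Kibana whether the routers export that field at all
  if [netflow][out_bytes] {
    mutate { add_tag => [ "has_out_bytes" ] }
  }
}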
About in_bytes:
The same issue as with the Cisco ASA: the graph does not match Cacti, MRTG, or the flow cache on the router itself.
Finally, I want to mention that I have reviewed every post I could find about this and tried multiple configurations, different templates, and different codecs in Logstash, and nothing worked.
Maybe I'm on the wrong path with this.
Thanks in advance.