Can't get Kibana to work for graphing Netflow data


(Fayyaadh) #1

Hi all

I'm a complete n00b at the ELK stack and I feel like I'm out of my depth.

Basically, I want to send Netflow data from my DD-WRT router to Logstash and then on to Elasticsearch and Kibana for some nice graphs.

I read through the Logstash and Elasticsearch guides to get a grip on the basics, and I managed to get data in from the router, but for the life of me I can't figure out why Kibana won't graph anything.

Anyway, here are my configs.

This is what I have for my Logstash config:

input {
	udp {
		port => 2055
		codec => netflow
	}
}

output {
	elasticsearch {
		hosts => [ "localhost:9200" ]
		index => "ddwrt-%{+YYYY.MM.dd}"
	}
}
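
(For reference, the debug output below came from temporarily swapping the elasticsearch block above for a plain stdout output — something like this, going from memory:)

output {
	# rubydebug prints each event as the hash shown below
	stdout {
		codec => rubydebug
	}
}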

When I output to stdout instead of Elasticsearch, I see records like these coming in from the router:

{
      "@version" => "1",
          "host" => "192.168.0.1",
       "netflow" => {
                    "dst_as" => 0,
                   "in_pkts" => 9,
            "first_switched" => "2017-09-16T19:03:50.999Z",
             "ipv4_next_hop" => "0.0.0.0",
               "l4_src_port" => 52953,
        "sampling_algorithm" => 0,
                  "in_bytes" => 2283,
                  "protocol" => 17,
                 "tcp_flags" => 0,
               "l4_dst_port" => 1900,
                    "src_as" => 0,
               "output_snmp" => 0,
                  "dst_mask" => 0,
             "ipv4_dst_addr" => "239.255.255.250",
                   "src_tos" => 0,
                  "src_mask" => 0,
                   "version" => 5,
              "flow_seq_num" => 412486,
              "flow_records" => 1,
             "ipv4_src_addr" => "192.168.0.200",
               "engine_type" => 0,
                 "engine_id" => 0,
                "input_snmp" => 1,
             "last_switched" => "2017-09-16T19:03:50.999Z",
         "sampling_interval" => 0
    },
    "@timestamp" => 2017-09-16T19:04:05.000Z
}

Seems correct so far?

For Elasticsearch, when I navigate to http://localhost:9200/_cat/indices?v I see this:

health status index                     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   ddwrt-2017.09.16          k1GAKtrRRVKtdhl-YPxxDw   5   1       1515            0    841.6kb        841.6kb
yellow open   .kibana                   k4GoPKJWSWyrjx5qWCZV9A   1   1          2            0      8.3kb          8.3kb

Again, seems correct and the index is there.
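
(As another check, a direct search like the one below should pull a raw document back out of the index, outside of Kibana — same host and port as above:)

http://localhost:9200/ddwrt-2017.09.16/_search?size=1&pretty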

But when I tell Kibana to use the index pattern ddwrt*, I can choose the @timestamp field as the time field, but I can't get any graphs or data to appear.

Under the 'Discover' tab, no fields show up unless I uncheck "Hide Missing Fields", and when I click one of those fields, they all say this:

This field is present in your elasticsearch mapping but not in any documents in the search results. You may still be able to visualize or search on it.

What am I missing or doing wrong?


(Mark Walkom) #2

FYI we’ve renamed ELK to the Elastic Stack, otherwise Beats feels left out :wink:

What version are you running? Do you have the right timeframe in the picker, top right?
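
One quick way to check is to look at the newest @timestamp actually stored in the index and compare it with the window you've selected — something along these lines should do it (adjust the index pattern to yours):

http://localhost:9200/ddwrt-*/_search?size=1&sort=@timestamp:desc&pretty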


(Fayyaadh) #3

Hi Mark

Should've renamed it BELK then. :joy:

I'm running the latest version.

It was the timeframe issue. I had only just started sending Netflow data from the router, and I could see Logstash and Elasticsearch receiving it, so there were only a few minutes' worth of documents. But if I selected "Last 15 mins" or "Last hour", nothing showed up. However, selecting "Today" worked and all the documents appeared.

Why is that? Shouldn't the first two options just show whatever was logged within those time frames?


(Mark Walkom) #4

It's based on the @timestamp field.
I don't know how the netflow codec works in detail, but it may be doing something with timezone conversion.
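
If the timestamps do turn out to be shifted, you can always take control of @timestamp yourself with a date filter and one of the flow fields from your stdout output — an untested sketch, but roughly:

filter {
	# take @timestamp from the flow record instead of the time Logstash received it
	date {
		match => [ "[netflow][last_switched]", "ISO8601" ]
	}
}

The date filter writes to @timestamp by default, so the time picker would then follow last_switched.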


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.