Netflow visualizations in Kibana 4 - Limited fields?

(Brayn) #1


I'm running a test environment where I have the ELK stack set up, receiving syslog and Netflow data from a router. Everything is coming in fine and I'm trying to build some visualizations.

I feel like the Netflow fields I'm given are very limited, and I can't even select them when creating visualizations. Am I missing something?

An example of a visualization I'm trying to make is a simple pie chart of SRC ports or SRC IPs, like here.

Is it because my test environment is very limited and simply can't provide those fields for me to work with?

(Court Ewing) #2

Are all of the fields you're trying to visualize nested under "netflow."? If so, can you flatten that data out when you index it in elasticsearch?
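For example, a mutate filter along these lines could rename the nested fields to the top level (the exact field names under `netflow` are an assumption based on typical netflow codec output, so adjust them to match your documents):

```
filter {
  if [netflow] {
    # Promote commonly used Netflow fields to the top level so Kibana
    # can aggregate on them directly. Field names are assumed; check
    # your actual documents and adjust accordingly.
    mutate {
      rename => {
        "[netflow][ipv4_src_addr]" => "ipv4_src_addr"
        "[netflow][ipv4_dst_addr]" => "ipv4_dst_addr"
        "[netflow][l4_src_port]"   => "l4_src_port"
        "[netflow][l4_dst_port]"   => "l4_dst_port"
      }
    }
  }
}
```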

Also, Kibana automatically determines and caches the mappings for an index at index pattern creation time. If you think there are fields in the index that are not showing up at all in Kibana, hit the "refresh mappings" button on the index pattern in the settings tab.
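You can also look at the raw mappings directly to see which fields Elasticsearch actually knows about (adjust the index pattern to match your indices):

```
# Dump the mappings Elasticsearch holds for your logstash indices;
# a field that is missing here can never show up in Kibana.
curl -XGET 'localhost:9200/logstash-*/_mapping?pretty'
```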

(Brayn) #3

Hi, appreciate your reply.

I used `curl 'localhost:9200/_cat/indices?v'` to check all the indices in Elasticsearch, and it returned a bunch of them.

I noticed two called logstash-netflow9-2016.04.04 and logstash-netflow9-2016.04.05. These were probably created by my config file, since I specified an index name with a date suffix in the output section. Is that correct?

You mentioned that I should flatten out the data, how can I do that?

My end goal would be to have the fields ipv4_src_addr, ipv4_dst_addr, l4_src_port, l4_dst_port and more, just like in the linked example. (For reference, I use a Juniper router and did not follow this.)

(Brayn) #4

I found out the following: when I use `curl -XGET localhost:9200/_template/logstash_netflow9?pretty` to take a look at the template, I notice the new fields showing up under Available Fields on the left after refreshing Kibana, but they disappear again shortly after.

This is what the template looks like:

Do I need to tell Elasticsearch that this is the template I want to use for netflow indefinitely, so that all incoming flows show up accordingly?

(Anh) #5

It looks to me like not all of the data is being sent from Logstash to Elasticsearch. You should turn on debug output in your Logstash config first to see whether all the necessary Netflow fields reach Logstash and get processed correctly.
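For example, an output section along these lines prints every decoded event to the console (stdout with the rubydebug codec is the usual way to do this; the fields you see will depend on what your router exports):

```
output {
  # Print each event as Logstash emits it, so you can verify that
  # the netflow codec decoded the fields you expect before they
  # are sent on to Elasticsearch.
  stdout {
    codec => rubydebug
  }
}
```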

I wrote a series about processing Netflow with ELK. I hope it helps you.

(Brayn) #6

This is what my configuration looks like now, after taking your advice:

```
input {
  tcp {
    port => 2222
    codec => netflow
    type => "netflow"
  }
  udp {
    port => 2222
    codec => netflow
    type => "netflow"
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "netflow-%{+YYYY.MM.dd}"
  }
}
```
When checking Kibana I noticed a new entry showing netflow with all the new fields. It was from DHCP I believe as port 67 and 68 were involved.

I thought I had it working, until I refreshed again a few minutes later and it was all gone: the available fields as well as the entry. There are still tons of other netflow entries like the one in my original post.

Am I missing something? Do these entries only appear occasionally, and if so, do the fields only show up while they exist? After all, I'm testing with a single router that isn't handling much traffic right now.

(Anh) #7

Could you tell me what version of Elasticsearch you are using?

Since you have seen a few entries with all the necessary fields, I assume Logstash is decoding the Netflow data fine. If your entries are in Elasticsearch, they shouldn't disappear for no reason. One possibility that causes such weird behavior is the timestamp of the entry in ES; I ran into a similar issue where data disappeared when I refreshed Kibana Discover.

Could you run the following and post the output:

```
curl -XGET 'localhost:9200/logstash-netflow9-2016.04.05/_mapping'
curl -XGET 'localhost:9200/logstash-netflow9-2016.04.05/_search'
```

Also, did you see any errors in ES log at indexing time?

(Brayn) #8

Thanks for your reply.

I am using Elasticsearch 2.2.1

_mapping output:

_search output:

I have not noticed any errors in the logs from the last day or so; I'll take another look to make sure when I can.

(Anh) #9

This mapping may be causing you trouble. I'm not sure if having `"index": "analyzed"` on a date data type is appropriate; someone else can confirm this.

One thing to try is to delete the index pattern in Kibana and recreate it, using either **first_switched** or **last_switched** as the timestamp field. If you can see the events normally then, it's a mapping issue.
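For comparison, a date field in an Elasticsearch 2.x mapping would normally look something like this (a sketch only; the field name and format here are assumptions, so match them to your template):

```
{
  "first_switched": {
    "type": "date",
    "format": "strict_date_optional_time||epoch_millis"
  }
}
```

The `"index": "analyzed"` / `"not_analyzed"` setting is meant for string fields, so a plain date mapping like this avoids the question entirely.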
