SRC & DST port visualizations (NetFlow)


(Brayn) #1

Hi,

I'm working with NetFlow data, making a few visualizations, and was hoping someone with more experience could answer a few questions.

This is what I have right now

If you open the legend, the information is there, including the specific port numbers, but I would prefer to show the port names instead. E.g. instead of showing 443 it would show HTTPS in the legend and tooltip.

Is this possible?


(Brayn) #2

I believe this is not possible at the Kibana level, but it should be possible somehow.

I tried to use the following filter plugin, but it didn't quite work: https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html

filter {
  mutate {
    # this treats "443" as a field name, so it creates a field
    # literally called "443" instead of rewriting the port value
    replace => { "443" => "%{netflow.l4_src_port}: HTTPS" }
  }
}

Is there anything else I can try to change the value of a field before it reaches Kibana?


#3

I'm not a pro, but I believe this cannot be solved in Kibana.

If you use Logstash, consider using the translate filter.

It will allow you to specify a dictionary like:

"443" : "HTTPS"
"80" : "HTTP"

...which I believe could solve your issue.

Link:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-translate.html


(Brayn) #4

Installed the plugin with:

/opt/logstash$ bin/plugin install logstash-filter-translate

The configuration of the filter looks like this right now:

filter {
  translate {
    dictionary => [ "80", "HTTP",
                    "443", "HTTPS" ]
  }
}

Trying it out atm


(Brayn) #5

I updated my config to the following:

filter {
  translate {
    field => "netflow.l4_src_port"
    dictionary => [ "80", "HTTP",
                    "443", "HTTPS",
                    "161", "SNMP" ]
  }
}

I tested the config, it came out OK, and I restarted the service. After a few minutes of running I checked the field on the new records and it's still just the number. Am I missing something?


(Mark Walkom) #6

It's probably https://www.elastic.co/guide/en/logstash/current/plugins-filters-translate.html#plugins-filters-translate-override


(Brayn) #7

I tried it out and my configuration looks like the following at the moment, but it still doesn't work.

Could it be because the data is numeric and I'm trying to "translate" it into text? Perhaps a datatype conflict?


filter {
  translate {
    field => "netflow.l4_src_port"
    override => true
    dictionary => [ "80", "HTTP",
                    "443", "HTTPS",
                    "161", "SNMP" ]
  }
}

This topic can be moved to the Logstash category if possible.

(Magnus Bäck) #8

Could it be because the data is numeric and I'm trying to "translate" it into text? Something with datatypes conflicting perhaps

Yes, that's most likely the problem. If the field has been mapped as an integer you can't store documents with that field being a string.
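If changing the mapping is not desirable, one alternative is to leave the numeric port field alone and write the translated name into a separate field using the translate filter's destination option. This is only a sketch; the destination field name here (netflow.l4_src_port_name) is an assumption, and the new field would still need a string mapping (or dynamic mapping) on the Elasticsearch side:

filter {
  translate {
    field       => "netflow.l4_src_port"
    # hypothetical field name for the translated value
    destination => "netflow.l4_src_port_name"
    dictionary  => [ "80",  "HTTP",
                     "443", "HTTPS",
                     "161", "SNMP" ]
  }
}

You could then build the visualization and legend on the new string field instead of the numeric one.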


(Brayn) #9

Alright, that makes sense.

This is what my mapping for the specific fields looks like:

              "l4_dst_port" : {
"type" : "long"
},
"l4_src_port" : {
"type" : "long"
},

How can I change the type to "string"? I use indices that are created daily; is there anything I need to keep in mind with that?


(Magnus Bäck) #10

Just change your index template and wait for the next day.


(Brayn) #11

Would appreciate some help with changing my index template if possible.

This is what my default template looks like:
http://pastebin.com/aCkyHySC

I don't seem to find the field I'm looking for; should I add it manually?


(Magnus Bäck) #12

Yes, you need to add your field. Add it alongside @version, for example.
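For illustration, the addition could look something like this. This is only a sketch: the exact nesting and index pattern depend on your own template, and the surrounding structure shown here is an assumption, not your actual file:

"properties" : {
  "@version" : { "type" : "string", "index" : "not_analyzed" },
  "netflow" : {
    "properties" : {
      "l4_src_port" : { "type" : "string" },
      "l4_dst_port" : { "type" : "string" }
    }
  }
}

Since the template is only applied at index creation, the change takes effect when the next daily index is created; existing indices keep their old mapping.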

