I'm running a test environment with ELK set up, receiving syslog and NetFlow data from a router. Everything is coming in fine, and I'm now trying to build some visualizations.
The fields I'm getting for NetFlow feel very limited, though, and I can't even select them when creating visualizations. Am I missing something?
Are all of the fields you're trying to visualize nested under "netflow."? If so, can you flatten that data out when you index it in Elasticsearch? See the sketch below for one way to do that.
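A minimal Logstash filter sketch, assuming the standard NetFlow v9 field names produced by the netflow codec (the exact fields in your flows may differ):
```
filter {
  if [type] == "netflow" {
    # Copy a few commonly used NetFlow fields to the top level
    # so Kibana can use them without the "netflow." nesting.
    mutate {
      rename => {
        "[netflow][ipv4_src_addr]" => "src_addr"
        "[netflow][ipv4_dst_addr]" => "dst_addr"
        "[netflow][l4_src_port]"   => "src_port"
        "[netflow][l4_dst_port]"   => "dst_port"
      }
    }
  }
}
```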
Also, Kibana automatically determines and caches the mappings for an index at index pattern creation time. If you think there are fields in the index that are not showing up at all in Kibana, hit the "refresh mappings" button on the index pattern in the settings tab.
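If you want to double-check outside of Kibana, something like this will show you one raw document with all of its fields as they actually exist in Elasticsearch (index pattern assumed; adjust it to match your indices):
```
# Fetch a single document to inspect its fields directly
curl -XGET 'localhost:9200/logstash-netflow9-*/_search?pretty&size=1'
```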
I used `curl 'localhost:9200/_cat/indices?v'` to list all indices in Elasticsearch, and it returned a bunch of them.
I noticed two called logstash-netflow9-2016.04.04 and logstash-netflow9-2016.04.05. These were probably created by my config file, since I specified an index name ending in the date in the output section. Is this correct?
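For reference, the output section looked roughly like this:
```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Daily indices: the date pattern matches the indices listed above
    index => "logstash-netflow9-%{+YYYY.MM.dd}"
  }
}
```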
You mentioned that I should flatten out the data; how can I do that?
I found out the following: when I use `curl -XGET localhost:9200/_template/logstash_netflow9?pretty` to take a look at the template, the new fields show up under Available Fields on the left after a refresh of Kibana, but they disappear again shortly after.
This is what my configuration looks like now after taking your advice:
```
input {
  tcp {
    port  => 2222
    codec => netflow
    type  => "netflow"
  }
  udp {
    port  => 2222
    codec => netflow
    type  => "netflow"
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "netflow-%{+YYYY.MM.dd}"
  }
}
```
When checking Kibana I noticed a new entry showing netflow with all the new fields. I believe it was DHCP traffic, as ports 67 and 68 were involved.
I thought I had it working, until I refreshed again a few minutes later and it was all gone: the available fields as well as the entry. There are still tons of other netflow entries like the one in my original post.
Am I missing something? Do these entries only show up occasionally, and if so, do the fields only appear while they do? After all, I'm testing with a single router that isn't handling much traffic right now.
Could you tell me what version of Elasticsearch you are using?
Since you have seen a few entries with all the necessary fields, I assume Logstash is decoding the NetFlow data fine. If your entries are in Elasticsearch, they shouldn't disappear for no reason. One thing that can cause this kind of weird behavior is the timestamp of the entry in ES: if events are indexed with a timestamp outside the time range selected in Kibana, they won't show up in Discover. I had a similar issue where data disappeared when I refreshed Kibana Discover.
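To check that, you could compare the newest @timestamp actually stored in the index against the time range in Kibana. Something like this, using the index name from your latest config:
```
# Return the single most recent document, sorted by @timestamp
curl -XGET 'localhost:9200/netflow-*/_search?pretty&size=1&sort=@timestamp:desc'
```
If the newest event is well outside the window selected in Discover (the default is only the last 15 minutes), the entries will seem to vanish even though they are still in the index.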
Could you run the following and post the output (index pattern assumed from your earlier posts; adjust it to match your indices):
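```
# Assumed index pattern; replace with whatever _cat/indices showed
curl -XGET 'localhost:9200/logstash-netflow9-*/_mapping?pretty'
```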
"@timestamp":{
"type":"date",
"index":"analyzed",
"format":"strict_date_optional_time||epoch_millis"
```
This mapping may be causing you trouble. I'm not sure that `"index": "analyzed"` is appropriate for a date type; maybe someone else can confirm this.
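For comparison, I'd expect a plain date mapping to look something like this (just a sketch, not taken from your actual template):
```
"@timestamp": {
  "type": "date",
  "format": "strict_date_optional_time||epoch_millis"
}
```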
One thing to try is to delete the index pattern in Kibana and recreate it, but use either **first_switched** or **last_switched** as the timestamp field. If you can then see the events normally, it's a mapping issue.