Hi Folks,
I have done a basic setup/configuration for storing NetFlow logs in Elasticsearch using Logstash. My Logstash config looks like:
input {
  udp {
    port => 9995
    codec => netflow {
      definitions => "<my-path>/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/logstash-codec-netflow-1.0.0/lib/logstash/codecs/netflow/netflow.yaml"
      versions => [9]
    }
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "logstash-netflow9-%{+YYYY.MM.dd}"
    host => "localhost"
  }
}
Note: right now I am trying to capture all NetFlow logs, but I plan to wrap the elasticsearch block in the output section with a condition like:

if [host] == "XXX.XXX.XXX.XXX"
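For reference, with that condition in place the output section would look something like the sketch below (the IP is a placeholder, as above; this is just how I intend to structure it, not something I have tested yet):

```
output {
  stdout { codec => rubydebug }
  if [host] == "XXX.XXX.XXX.XXX" {
    elasticsearch {
      index => "logstash-netflow9-%{+YYYY.MM.dd}"
      host => "localhost"
    }
  }
}
```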
With the above config, when I run Logstash I see no output on stdout and no new index in Elasticsearch. However, Logstash's logs contain messages like:
{:timestamp=>"2015-08-19T07:05:07.841000+0000", :message=>"No matching template for flow id 260", :level=>:warn}
{:timestamp=>"2015-08-19T07:05:08.008000+0000", :message=>"No matching template for flow id 265", :level=>:warn}
I have verified using Wireshark that NetFlow data is indeed arriving on port 9995, and also confirmed this with the network team. Template records are sent at 5-minute intervals, and once a template record is received, subsequent data records are decoded according to that template, as seen in the packets captured by Wireshark.
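To double-check the raw traffic independently of Logstash, I put together a small Python sketch (my own helper, not part of the Logstash setup) that binds to UDP 9995 (Logstash must be stopped first, or the port will be in use) and prints the NetFlow version and flowset IDs of each packet. Per the NetFlow v9 layout (RFC 3954), flowset ID 0 is a template, 1 is an options template, and IDs >= 256 are data flowsets, which is what the "flow id 260/265" in the warnings refers to:

```python
import socket
import struct

def flowset_ids(packet: bytes):
    """Return (version, [flowset IDs]) parsed from a raw NetFlow v9 packet."""
    version = struct.unpack_from("!H", packet, 0)[0]
    ids = []
    offset = 20  # NetFlow v9 packet header is 20 bytes
    while offset + 4 <= len(packet):
        fs_id, fs_len = struct.unpack_from("!HH", packet, offset)
        if fs_len < 4:  # malformed flowset length; stop to avoid looping forever
            break
        ids.append(fs_id)
        offset += fs_len
    return version, ids

def listen(port: int = 9995) -> None:
    """Print per-packet flowset IDs; stop Logstash first so the port is free."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, addr = sock.recvfrom(65535)
        version, ids = flowset_ids(data)
        kinds = ["template" if i == 0 else "options" if i == 1 else "data"
                 for i in ids]
        print(f"{addr[0]}: v{version} flowsets {list(zip(ids, kinds))}")

# To inspect live traffic: listen(9995)
```

Running `listen()` while the exporter is active should show whether any template flowsets (ID 0) actually reach this host, or only data flowsets like 260 and 265.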
I first tried this with Elasticsearch 1.3.4 and Logstash 1.4.2 (basically following the steps here - http://blogs.cisco.com/security/step-by-step-setup-of-elk-for-netflow-analytics), and it did not work: no output on stdout and no new index in Elasticsearch.
I then tried the latest versions as well, Elasticsearch 1.7.1 and Logstash 1.5.3, but still see no data flowing from Logstash to Elasticsearch or stdout.
I am unable to figure out what is missing and where, given that everything seems to be in place. Any help investigating/resolving this issue would be appreciated.
Thanks.