Netflow module

Hello,

I am testing the Netflow module on the ELK stack and not getting anywhere.

Can someone let me know how to configure Logstash to receive the Netflow data and send it to Elasticsearch, so that I can see the data in Kibana?

Just use an elasticsearch output plugin. No special configuration is necessary for Netflow data.

I must disagree with @magnusbaeck. I would argue that anyone who thinks that the output of the Netflow codec is sufficient, as-is, doesn't have any serious use of Netflow data.

With additional parsing, formatting and enrichment of the data, significant additional insights can be attained. The Netflow module tries to cover a few of the very basics. However, ElastiFlow and other solutions offer a lot more functionality.


I must disagree with @magnusbaeck. I would argue that anyone who thinks that the output of the Netflow codec is sufficient, as-is, doesn't have any serious use of Netflow data.

I had a feeling you were going to jump in but I was too lazy for nuance. I just meant that the elasticsearch output doesn't require any particular configuration for Netflow. I'm sure a custom index template might be useful but for someone who isn't getting anywhere that's premature optimization.

Fair enough. A simple UDP input with the codec and a stdout output, to make sure flows are coming in and being decoded by the codec, is also a good place to start.
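For example, a minimal debugging pipeline along these lines (port 2055 is an assumption; adjust it to wherever your exporter sends flows):

```
input {
  udp {
    port  => 2055
    codec => netflow
  }
}

output {
  # Print decoded events to the console so you can inspect them
  # before involving Elasticsearch at all.
  stdout {
    codec => rubydebug
  }
}
```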

I was just explaining to someone today that when integrating new data I don't even mess around with Elasticsearch or Kibana until I can look at the results via stdout and see that they look how I want them to. Elasticsearch and Kibana are less than 10% of the total effort; 90% of the effort is getting the data right. The rest of it, even the index templates, is easy once the data is done right.

I hope I didn't seem too combative. Sorry.


I hope I didn't seem too combative. Sorry.

No, not at all.

Hi Robert,
I have a couple of questions:

1. Can't I achieve the same functionality without using ElastiFlow?

2. With the usual ELK stack, can't I parse the data and get the same thing that ElastiFlow is doing?

3. I am getting the below error when configuring Netflow, and I cannot see Logstash receiving the Netflow data, although I do see the Netflow data in tcpdump.

Could you please help? I appreciate your support and reply. Thanks.

input {
  udp {
    port  => 2055
    codec => netflow
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
  }
}

Error

[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2018-05-09T08:41:32,854][WARN ][logstash.codecs.netflow ] Template length exceeds flowset length, skipping {:template_id=>256, :template_length=>62, :record_length=>61}
[2018-05-09T08:41:32,974][WARN ][logstash.codecs.netflow ] Ignoring Netflow version v0
[2018-05-09T08:41:32,975][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.05.09", :_type=>"doc", :_routing=>nil}, #LogStash::Event:0x56d4c4fc], :response=>{"index"=>{"_index"=>"logstash-2018.05.09", "_type"=>"doc", "_id"=>"BGsQRGMBozRH_W8JDrkp", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [netflow.in_bytes]", "caused_by"=>{"type"=>"json_parse_exception", "reason"=>"Numeric value (14925492115059245056) out of range of long (-9223372036854775808 - 9223372036854775807)\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@51113252; line: 1, column: 169]"}}}}}
[2018-05-09T08:41:33,053][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.05.09", :_type=>"doc", :_routing=>nil}, #LogStash::Event:0x7a547207], :response=>{"index"=>{"_index"=>"logstash-2018.05.09", "_type"=>"doc", "_id"=>"f2sQRGMBozRH_W8JDrl2", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [netflow.in_pkts]", "caused_by"=>{"type"=>"json_parse_exception", "reason"=>"Numeric value (9295429630892703744) out of range of long (-9223372036854775808 - 9223372036854775807)\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@e28c6ff; line: 1, column: 207]"}}}}}


1. Can't I achieve the same functionality without using ElastiFlow?

2. With the usual ELK stack, can't I parse the data and get the same thing that ElastiFlow is doing?

Given enough knowledge, skills and time there is nothing that can't be replicated. It is just a question of how much you want to invest in getting to that point. By the questions someone asks I think you can get a good idea of how much time they are truly willing to invest.

[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2018-05-09T08:41:32,851][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 300 from source id 3203342338, because no template to decode it with has been received. This message will usually go away after 1 minute.

Netflow v9 and IPFIX have a flexible payload structure. Sources will periodically send "templates" which tell the collector (in this case Logstash and the Netflow codec) how to decode the payload. The above messages just indicate that a template hasn't yet been received.
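Until a template arrives, those flowsets simply cannot be decoded, and events will only start appearing once the source's next template refresh is received. The codec has offered options to tune and persist the template cache; a hedged sketch (option names as documented for the logstash-codec-netflow plugin — verify against the version you have installed):

```
input {
  udp {
    port  => 2055
    codec => netflow {
      cache_ttl       => 4000                 # seconds to keep received templates
      cache_save_path => "/var/lib/logstash"  # persist templates across restarts
    }
  }
}
```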

[2018-05-09T08:41:32,854][WARN ][logstash.codecs.netflow ] Template length exceeds flowset length, skipping {:template_id=>256, :template_length=>62, :record_length=>61}
[2018-05-09T08:41:32,974][WARN ][logstash.codecs.netflow ] Ignoring Netflow version v0
[2018-05-09T08:41:32,975][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.05.09", :_type=>"doc", :_routing=>nil}, #LogStash::Event:0x56d4c4fc], :response=>{"index"=>{"_index"=>"logstash-2018.05.09", "_type"=>"doc", "_id"=>"BGsQRGMBozRH_W8JDrkp", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [netflow.in_bytes]", "caused_by"=>{"type"=>"json_parse_exception", "reason"=>"Numeric value (14925492115059245056) out of range of long (-9223372036854775808 - 9223372036854775807)\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@51113252; line: 1, column: 169]"}}}}}
[2018-05-09T08:41:33,053][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.05.09", :_type=>"doc", :_routing=>nil}, #LogStash::Event:0x7a547207], :response=>{"index"=>{"_index"=>"logstash-2018.05.09", "_type"=>"doc", "_id"=>"f2sQRGMBozRH_W8JDrl2", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [netflow.in_pkts]", "caused_by"=>{"type"=>"json_parse_exception", "reason"=>"Numeric value (9295429630892703744) out of range of long (-9223372036854775808 - 9223372036854775807)\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@e28c6ff; line: 1, column: 207]"}}}}}

These messages are all related. Once the template was received, it didn't seem to match with the flow records it was supposed to describe. This mismatch is what is causing the errors with indexing to Elasticsearch, as the data clearly cannot be decoded properly using the template.
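The out-of-range values are a tell-tale sign of misdecoded fields: Elasticsearch maps these counters to a signed 64-bit long, whose maximum is 9223372036854775807 (as the error itself states), so a garbage value like 14925492115059245056 is rejected at index time. A quick sanity check in plain shell, comparing the decimal strings as unsigned values:

```shell
#!/bin/sh
LONG_MAX=9223372036854775807     # max signed 64-bit value Elasticsearch accepts in a long field
REJECTED=14925492115059245056    # netflow.in_bytes value from the error above

# Compare as unsigned decimals: a longer digit string, or an equal-length
# string that sorts lexically after LONG_MAX, is out of range for a long.
if [ ${#REJECTED} -gt ${#LONG_MAX} ] || \
   { [ ${#REJECTED} -eq ${#LONG_MAX} ] && [ "$REJECTED" \> "$LONG_MAX" ]; }; then
  echo "out of range for long"
else
  echo "fits in long"
fi
```

Here the rejected value has 20 digits against the 19 of the maximum long, so no mapping change on the Elasticsearch side will fix this; the decoded value itself is wrong.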

One thing that can cause such a problem is two devices sending the same template ID (in your case flowset id 300) while being configured to include different data in those templates. Otherwise the issue may be related to malformed flow records (a bug in the device), or content in the flow records that is not handled by the codec.

Either way, you should open an issue on the GitHub repository for the Netflow codec. You will need to send a PCAP of your flow data so that it can be properly investigated.
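To capture such a PCAP on the collector, something along these lines is typical (assuming flows arrive on UDP port 2055; requires root, and the `-c` packet count is an arbitrary choice):

```
sudo tcpdump -i any -n udp port 2055 -w netflow-sample.pcap -c 5000
```

Make sure the capture spans at least one template refresh interval, so the PCAP contains both templates and flow records.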

Thanks Robert for the reply. I have another thing: when I do netstat -an | grep 2055, I don't see Logstash listening on port 2055.

I see the flows being received in tcpdump, but I don't see Logstash receiving the Netflow traffic.

I can't say anything about your netstat output without seeing more info. However, there isn't really a question of whether or not you are receiving Netflow packets. You must be, or you wouldn't have the related error messages in the logs.
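One thing worth double-checking: the udp input binds a UDP socket, so it will never appear among TCP listeners. Commands along these lines (tool availability varies by distro) list UDP sockets explicitly:

```
# List UDP listeners; Logstash's udp input should show 0.0.0.0:2055 (or :::2055)
ss -lun | grep 2055
# or, with net-tools:
netstat -anu | grep 2055
```

If nothing shows up here while Logstash is running, the pipeline with the udp input most likely failed to start; check the Logstash log for bind or configuration errors.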

I had to change my source to send the Netflow data; once I changed it, Logstash stopped receiving the data.

[root@ logstash]# netstat -an | grep 2055
[root@ logstash]#

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.