Netflow - Flow data reaching server, Logstash zero output

Hello,

I have installed an ELK Stack in an Ubuntu 18.04 LXC container.

I am sending Netflow v9 flows from a Cisco ASR9000 node to Logstash (the end goal is to use Elastiflow). The flows are reaching the server (confirmed via tcpdump), but I am stuck: Logstash does not seem to read the data. My Logstash configuration is as basic as I could make it:

root@lxc-ELKNetflow:/etc/logstash/conf.d# more netflow.conf 
input {
  udp {
    port                 => 9995
    codec                => netflow
  }
}
output {
  stdout { codec => rubydebug }
  file {  path => "/tmp/netflow.txt" }
}
root@lxc-ELKNetflow:/etc/logstash/conf.d# 

And I am not seeing anything populated in the /tmp/netflow.txt file or in the syslog log.

-rwxrwxrwx 1 logstash root 0 Aug 16 14:13 netflow.txt

Here is a snippet of logstash-plain.log

[2018-08-16T14:58:31,576][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2018-08-16T14:58:33,775][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x3e73a622 run>"}
[2018-08-16T14:58:50,409][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-16T14:58:53,993][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-16T14:58:54,110][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x56fb171d sleep>"}
[2018-08-16T14:58:54,122][INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>"0.0.0.0:9995"}
[2018-08-16T14:58:54,170][INFO ][logstash.inputs.udp ] UDP listener started {:address=>"0.0.0.0:9995", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2018-08-16T14:58:54,180][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>}
[2018-08-16T14:58:54,477][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
root@lxc-ELKNetflow-HOTComm:/var/log/logstash#

As mentioned, flows are reaching the server:

root@lxc-ELKNetflow:/var/log/logstash# tcpdump -ni eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:13:55.902463 IP 192.168.1.250.11602 > 172.29.74.8.9995: UDP, length 1324
15:13:55.904977 IP 192.168.1.250.11602 > 172.29.74.8.9995: UDP, length 1392
15:13:55.904986 IP 192.168.1.250.11602 > 172.29.74.8.9995: UDP, length 1392
15:13:55.904989 IP 192.168.1.250.11602 > 172.29.74.8.9995: UDP, length 1260
15:13:55.904992 IP 192.168.1.250.11602 > 172.29.74.8.9995: UDP, length 416
15:13:55.904995 IP 192.168.1.250.11602 > 172.29.74.8.9995: UDP, length 416
15:13:55.904997 IP 192.168.1.250.11602 > 172.29.74.8.9995: UDP, length 352
15:13:55.905000 IP 192.168.1.250.11602 > 172.29.74.8.9995: UDP, length 92
^C
8 packets captured
13 packets received by filter
5 packets dropped by kernel
root@lxc-ELKNetflow:/var/log/logstash#

I would try explicitly binding the IP rather than using 0.0.0.0.

Committed the change, no difference :frowning:

root@lxc-ELKNetflow:/etc/logstash/conf.d# cat netflow.conf
input {
  udp {
    host  => "172.29.74.8"
    port  => 9995
    codec => netflow
  }
}
output {
  stdout { codec => rubydebug }
  file { path => "/tmp/netflow.txt" }
}
root@lxc-ELKNetflow:/etc/logstash/conf.d#

Output from logstash-plain.log:

[2018-08-16T15:30:14,637][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2018-08-16T15:30:16,308][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x56fb171d run>"}
[2018-08-16T15:30:36,005][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-16T15:30:36,486][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, } at line 3, column 21 (byte 37) after input {\n udp {\n host\t\t => 172.29", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:42:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:50:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:12:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `compile_sources'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:49:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:167:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:305:in `block in converge_state'"]}
[2018-08-16T15:30:36,769][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-08-16T15:31:28,439][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-16T15:31:32,437][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-16T15:31:32,710][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x3ac38992 run>"}
[2018-08-16T15:31:32,738][INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>"172.29.74.8:9995"}
[2018-08-16T15:31:32,783][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>}
[2018-08-16T15:31:32,792][INFO ][logstash.inputs.udp ] UDP listener started {:address=>"172.29.74.8:9995", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2018-08-16T15:31:32,984][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
root@lxc-ELKNetflow:/etc/logstash/conf.d#
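For reference, the ConfigurationError in the log above is Logstash's parser complaining about an unquoted host value (`host => 172.29...` in the first attempt); string settings need quotes, as the config shown afterwards has. A parse error like this can also be caught before restarting the service by running `bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/netflow.conf`. A minimal corrected fragment:

```
input {
  udp {
    host  => "172.29.74.8"   # quote the address; a bare 172.29.74.8 fails to parse
    port  => 9995
    codec => netflow
  }
}
```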

Regrettably, still nothing in syslog and nothing populated in /tmp/netflow.txt:

-rwxrwxrwx 1 logstash root 0 Aug 16 14:13 netflow.txt

I am wondering if the netflow codec is deciding that flowset.records is empty. Try taking the codec off the input, just to see whether it is receiving the packets at all. There are lots of paths in the decode where it never reaches the yield that creates an event.
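One reason the codec can stay silent is that NetFlow v9 is template-based: data flowsets cannot be decoded until a matching template flowset arrives from the exporter. A quick way to sanity-check the raw packets is to parse the fixed 20-byte v9 header (a sketch; field layout per RFC 3954, and `parse_v9_header` is just an illustrative helper name):

```python
import struct

def parse_v9_header(packet: bytes) -> dict:
    """Parse the 20-byte NetFlow v9 export packet header (RFC 3954)."""
    version, count, uptime, unix_secs, seq, source_id = struct.unpack(
        "!HHIIII", packet[:20]
    )
    return {
        "version": version,      # should be 9 for NetFlow v9
        "count": count,          # number of records in this packet
        "sys_uptime_ms": uptime,
        "unix_secs": unix_secs,
        "sequence": seq,
        "source_id": source_id,
    }
```

If `version` comes back as 9 on the captured packets, the exporter side is fine and the problem is between the socket and the codec.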

Still no difference :frowning:

root@lxc-ELKNetflow:/tmp# cat /etc/logstash/conf.d/netflow.conf
input {
  udp {
    host => "172.29.74.8"
    port => 9995
  }
}
output {
  stdout { codec => rubydebug }
  file { path => "/tmp/netflow.txt" }
}
root@lxc-ELKNetflow:/tmp#

root@lxc-ELKNetflow:/tmp# tail -F /var/log/syslog
Aug 16 19:54:32 lxc-ELKNetflow systemd[1]: logstash.service: Failed to reset devices.list: Operation not permitted
Aug 16 19:54:32 lxc-ELKNetflow systemd[1]: Started logstash.
Aug 16 19:54:49 lxc-ELKNetflow logstash[11676]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Aug 16 19:54:51 lxc-ELKNetflow logstash[11676]: [2018-08-16T15:54:51,296][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
Aug 16 19:54:54 lxc-ELKNetflow logstash[11676]: [2018-08-16T15:54:54,225][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
Aug 16 19:54:54 lxc-ELKNetflow logstash[11676]: [2018-08-16T15:54:54,337][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x46d0f4dc run>"}
Aug 16 19:54:54 lxc-ELKNetflow logstash[11676]: [2018-08-16T15:54:54,369][INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>"172.29.74.8:9995"}
Aug 16 19:54:54 lxc-ELKNetflow logstash[11676]: [2018-08-16T15:54:54,416][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>}
Aug 16 19:54:54 lxc-ELKNetflow logstash[11676]: [2018-08-16T15:54:54,416][INFO ][logstash.inputs.udp ] UDP listener started {:address=>"172.29.74.8:9995", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
Aug 16 19:54:54 lxc-ELKNetflow logstash[11676]: [2018-08-16T15:54:54,609][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

^C
root@lxc-ELKNetflow:/tmp#

root@lxc-ELKNetflow:/tmp# ls -al /tmp/netflow.txt
-rwxrwxrwx 1 logstash root 0 Aug 16 14:13 /tmp/netflow.txt
root@lxc-ELKNetflow:/tmp#
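To rule Logstash out entirely, a minimal UDP receiver can confirm whether datagrams actually reach the application layer on that socket. A sketch in Python; the bind address, port, and function name are assumptions matching the setup above:

```python
import socket

def receive_one(host="0.0.0.0", port=9995, timeout=5.0):
    """Bind a UDP socket and wait for a single datagram, then report it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind((host, port))
    try:
        data, addr = sock.recvfrom(65535)
        print(f"received {len(data)} bytes from {addr[0]}:{addr[1]}")
        return data, addr
    finally:
        sock.close()
```

Running this (e.g. `receive_one(host="172.29.74.8")` while Logstash is stopped) and seeing nothing, even though tcpdump shows the packets, would point at something below the socket layer dropping the traffic.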

Must be a silly issue. I installed the container today specifically for this purpose. I mean, no FW rules at all:

root@lxc-ELKNetflow:/tmp# iptables -L -nv
Chain INPUT (policy ACCEPT 201K packets, 44M bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 201K packets, 47M bytes)
pkts bytes target prot opt in out source destination
root@lxc-ELKNetflow:/tmp#

I made progress.
I think there are issues when two NICs are in use.

In this case, one NIC was used strictly for listening for Netflow traffic (Logstash was configured to listen on this subnet) and the other NIC was used to reach Kibana from a management subnet. When I removed the management subnet from the picture and configured Kibana to listen on the same subnet as the Netflow traffic, Logstash started to see Netflow traffic in the rubydebug output.
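One possible (unconfirmed) explanation for the two-NIC behaviour is Linux reverse-path filtering: with strict rp_filter, packets arriving on one interface while the return route to the exporter points out the other interface are silently dropped before any socket sees them, which would match tcpdump seeing the flows while Logstash does not. A sketch of how to inspect it:

```shell
# Reverse-path filtering: 1 = strict (drops asymmetric traffic),
# 2 = loose, 0 = disabled.
sysctl net.ipv4.conf.all.rp_filter
sysctl net.ipv4.conf.default.rp_filter
# If strict filtering turns out to be the culprit, loose mode can be tried:
#   sysctl -w net.ipv4.conf.all.rp_filter=2
```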

Very strange and silly issue. Oh well. Perhaps I overlooked something in the configuration of the ELK Stack?
Hopefully this write up helps others. Spent at least a week on this (on and off).

Another observation: the interface listening for the Netflow traffic needed to be flapped (shut/no shut) before traffic was seen. Not sure why.

Again .. I hope this helps the next person that runs into this issue.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.