NetFlow v9 in Logstash not working, not getting any logs

Hello guys,

I'm facing a really weird issue: I cannot get any NetFlow logs into Logstash. They don't even show up on stdout. With tcpdump dst port 1535 on the host I can see that logs are coming in, and with tcpdump dst port 1535 -A I can also see that there is a message block, but it is not readable. When I start the Docker container with -it and a bash shell inside it, the same tcpdump dst port 1535 shows the same output.

This is the last message in Logstash:
06:26:09.143 [[main]<udp] INFO logstash.inputs.udp - UDP listener started {:address=>"localhost:1535", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}

I have no idea how to solve this issue. Can you help me?
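
For reference, these are the capture commands described above, run on the host:

# confirm packets are arriving at UDP port 1535
tcpdump dst port 1535
# same, but also print the payload as ASCII
tcpdump dst port 1535 -A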

08:58:07.037483 IP netflowsender.53980 > logstashserver.1535: UDP, length 596
E..pA.@.?...
c..
b.
.....+k.
.TYT...7.........D.....
.p......*#................................................................*#.p..............................................................
c.K..........................................................................
c.K..............................................................
c....
...5................................................E...E..........5
c....................................................E...E.......
.p......#*................ o.................. 3....... ..E...E...........#*.p..................................................E...E..

Config file:

input {
  udp {
    host => "localhost"
    port => 1535
    codec => netflow {
      versions => [9]
      netflow_definitions => "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-netflow-3.4.0/lib/logstash/codecs/netflow/netflow.yaml"
    }
    type => netflow
  }
  udp {
    host => "localhost"
    port => 1536
    codec => netflow {
      versions => [10]
      target => ipfix
    }
    type => ipfix
  }
  tcp {
    host => "localhost"
    port => 4739
    codec => netflow {
      versions => [10]
      target => ipfix
    }
    type => ipfix
  }
}

output {
  stdout { codec => rubydebug }
  gelf {
    host => 'graylog'
    port => 12202
  }
}

Logstash version: 5.4.2, running in Docker on Debian.

Change

host => "localhost"

to

host => "your_IP_address"

and point your flow export config to your_IP_address:1535.
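
A sketch of what the first UDP input could then look like. Binding to 0.0.0.0, i.e. all interfaces, is my own suggestion rather than part of the advice above; use it if you are unsure which address the flows arrive on:

input {
  udp {
    host => "0.0.0.0"   # or the actual IP address the exporter sends to
    port => 1535
    codec => netflow {
      versions => [9]
    }
    type => netflow
  }
}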

Can you send me a .pcap of your Netflow traffic so I can take a look at it?

Also, what device(s) are you exporting from?

input {
  udp {
    host => "10.98.241.10"
    port => 1535
    codec => netflow {
      versions => [9]
      netflow_definitions => "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-netflow-3.4.0/lib/logstash/codecs/netflow/netflow.yaml"
    }
    type => netflow
  }
  udp {
    host => "localhost"
    port => 1536
    codec => netflow {
      versions => [10]
      target => ipfix
    }
    type => ipfix
  }
  tcp {
    host => "localhost"
    port => 4739
    codec => netflow {
      versions => [10]
      target => ipfix
    }
    type => ipfix
  }
}

output {
  stdout { codec => rubydebug }
  gelf {
    host => 'graylog'
    port => 12202
  }
}

Sorry, I don't know what "config flow point" means exactly.

Barracuda firewalls. Of course I can give you a pcap.
pcap: https://www.dropbox.com/s/ck2x020c4d08zm3/tcpdump.pcap?dl=0

The funny thing is that I don't see anything on stdout at all, even with --debug on. I don't see anything being dropped or rejected either. We need Logstash to accept NetFlow v9, since Graylog cannot deal with it.

Ah, it looks like this is actually IPFIX traffic, so you should either have the Barracudas export it to your 4739 port, or change the Logstash ports around.
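
For example, "changing the ports around" could mean pointing the existing IPFIX input at port 1535 instead of 1536, roughly like this (the address is taken from your config above):

input {
  udp {
    host => "10.98.241.10"   # address from the config above
    port => 1535             # the port the Barracuda is already exporting to
    codec => netflow {
      versions => [10]       # IPFIX
      target => ipfix
    }
    type => ipfix
  }
}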

Could you let the tcpdump run for a minute or two? Your pcap doesn't contain any of the template packets that are needed to decode the data packets.
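
Something along these lines on the host should do it; the interface name and output file name are placeholders, adjust them to your setup:

# capture a few minutes of flow traffic so the periodic template packets are included,
# then stop it with Ctrl-C and upload the resulting file
tcpdump -i eth0 -w netflow.pcap udp dst port 1535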

I'm guessing this is non-production data? If so, is it OK if I include a sample from your new pcap in our rspec tests over at github.com/logstash-plugins/logstash-codec-netflow? This helps expand our known-working library of NetFlow exporters against which we test every release.


I changed the input in the config file.

Here you go for the pcap: https://www.dropbox.com/s/vu4gxlxlv6hgm1a/tcpdumpnonprod.pcap?dl=0 If it isn't enough, just tell me. You can add it to GitHub. I searched for a folder where I could put it myself, but sadly I didn't find one. I would have loved to contribute :wink:

Still, in the logs I don't see any output going to stdout:
15:55:17.292 [[main]-pipeline-manager] INFO logstash.inputs.tcp - Starting tcp input listener {:address=>"localhost:4739"}
15:55:17.315 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
15:55:17.322 [[main]<udp] INFO logstash.inputs.udp - Starting UDP listener {:address=>"localhost:1535"}
15:55:17.347 [[main]<udp] INFO logstash.inputs.udp - UDP listener started {:address=>"localhost:1535", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
15:55:17.371 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

I can change the IPFIX template that the Barracuda is sending to "Default / Extended UniFlow"; it is currently set to "Default without Barracuda Custom Fields and UniFlow". I can also change the "Byte order for data" to little or big endian.

OK, I can replay your pcap to my Logstash instance. One sample of what I get:

{
       "netflow" => {
          "destinationIPv4Address" => "10.99.252.50",
                 "octetTotalCount" => 65,
        "destinationTransportPort" => 53,
              "flowStartSysUpTime" => 2395375053,
               "sourceIPv4Address" => "10.99.130.239",
                "flowEndSysUpTime" => 2395395322,
        "flowDurationMilliseconds" => 20269,
                "ingressInterface" => 48660,
                         "version" => 10,
                "packetDeltaCount" => 0,
                   "firewallEvent" => 2,
              "protocolIdentifier" => 17,
                "sourceMacAddress" => "00:00:00:00:00:00",
                 "egressInterface" => 26092,
                 "octetDeltaCount" => 0,
             "sourceTransportPort" => 65105,
                "packetTotalCount" => 1
    },
    "@timestamp" => 2017-06-29T13:58:28.000Z,
      "@version" => "1",
          "host" => "172.16.32.201",
          "tags" => []
}

I'm not sure what the issue is. These Docker containers are given an IP on a private network on the host side, right? So the Barracuda firewall cannot reach the container unless there is some port forwarding set up on the host?
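
One thing worth checking inside the container (a sketch; ss or netstat may need to be installed in the image first) is which address Logstash is actually bound to. With host set to localhost, as the UDP listener log above shows, Logstash only accepts packets addressed to 127.0.0.1, while Docker's published port delivers them to the container's eth0 address:

# inside the container: list UDP listeners and check the local address column
ss -ulpn | grep 1535
# or, if ss is not available:
netstat -ulpn | grep 1535

If the listener shows 127.0.0.1:1535 rather than 0.0.0.0:1535, changing the input to host => "0.0.0.0" should make the forwarded packets reach Logstash.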

Thanks for the pcap!

Yes, I actually start the container with the following run command:

"docker container run -v /srv/logstash/config-dir:/config-dir -v /graylog/data/logstash:/usr/share/logstash/data --link graylog:graylog -p 1535:1535/udp --name logstash pkahr/docker-logstash-gelf -f /config-dir/logstash.conf"

It seems like it's more of an issue with Docker. But when I start the container with -it and run a tcpdump inside the container on port 1535, I successfully see the incoming packets. I don't know how to debug it further. I even built my own Docker image with the netflow and gelf plugins; that didn't change a thing, everything is still dropped by Logstash. I have no output on stdout, nor on the gelf output that is linked to Graylog.
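
For what it's worth, a quick check inside the container (assuming the interface is eth0) would be to look at which destination address the forwarded packets actually carry, since a listener bound to localhost will never match them:

# inside the container: show the destination IP of the forwarded packets
# (they should be addressed to the container's eth0 IP, not 127.0.0.1)
tcpdump -ni eth0 dst port 1535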
