I'm facing a really weird issue: I cannot get any NetFlow logs into Logstash. They don't even show up on stdout. With tcpdump dst port 1535 I can see on the host that logs are coming in, and with tcpdump dst port 1535 -A I also see a message block, although it isn't readable. When I start the Docker container with -it and open a bash shell inside it, the same tcpdump dst port 1535 shows the same output.
This is the last message in Logstash:
06:26:09.143 [[main]<udp] INFO logstash.inputs.udp - UDP listener started {:address=>"localhost:1535", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
I have no idea how to solve this issue. Can you help me?
The funny thing is that I don't see anything on stdout, even with --debug on. Nothing in the logs suggests that events are being dropped or failing to be written anywhere. We need Logstash to accept NetFlow v9, since Graylog cannot deal with it.
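For context, a minimal sketch of the setup described here (not my exact config file, just the relevant parts):

input {
  udp {
    port  => 1535
    codec => netflow   # logstash-codec-netflow
  }
}
output {
  stdout { codec => rubydebug }
}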
Ah, it looks like this is actually IPFIX traffic, so you should either have the Barracudas export it to your 4739 port, or change the Logstash ports around.
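If you change the Logstash side instead of the exporter, the input would look roughly like this (sketch only, adjust to your config):

input {
  udp {
    port  => 4739      # standard IPFIX port
    codec => netflow   # the netflow codec also decodes IPFIX
  }
}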
Could you let the tcpdump run for a minute or two? Because your pcap doesn't contain any template packets that are needed to decode the data packets.
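For example (interface name and file name are placeholders):

timeout 120 tcpdump -i eth0 -n udp port 1535 -w ipfix-sample.pcap

That should capture at least one of the periodic template records along with the data records.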
I'm guessing this is non-production data? If so, is it OK if I include a sample from your new pcap in our rspec tests over at github.com/logstash-plugins/logstash-codec-netflow? This helps expand our known-working library of netflow exporters against which we test every release.
Still, in the logs I don't see any output on stdout:
15:55:17.292 [[main]-pipeline-manager] INFO logstash.inputs.tcp - Starting tcp input listener {:address=>"localhost:4739"}
15:55:17.315 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
15:55:17.322 [[main]<udp] INFO logstash.inputs.udp - Starting UDP listener {:address=>"localhost:1535"}
15:55:17.347 [[main]<udp] INFO logstash.inputs.udp - UDP listener started {:address=>"localhost:1535", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
15:55:17.371 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
I can change the IPFIX template that the Barracuda is sending to "Default / Extended UniFlow"; it is currently set to "Default without Barracuda Custom Fields and UniFlow". I can also change the "Byte order for data" to little or big endian.
I'm not sure what the issue is. These Docker containers are given an IP on a private network on the host side, right? So the Barracuda firewall cannot reach the container unless there is some port forwarding set up on the host?
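Something along these lines when starting the container (image name is a placeholder; note the /udp suffix, since -p publishes TCP by default):

docker run -it -p 1535:1535/udp -p 4739:4739/udp your-logstash-image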
It seems like it's more of an issue with Docker. But when I start the container with -it and run tcpdump inside the container on port 1535, I successfully get input. I don't know how to debug it further. I even built my own Docker image with the netflow and gelf plugins; that didn't change a thing, everything is still dropped by Logstash. I have no output on stdout, or on the GELF output that is linked to Graylog.
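For reference, the image is built from a Dockerfile along these lines (base image tag approximate):

FROM docker.elastic.co/logstash/logstash:5.4.0
RUN bin/logstash-plugin install logstash-codec-netflow && \
    bin/logstash-plugin install logstash-output-gelf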