Enabling the sFlow Codec in Logstash Container?

Hi Logstash Jedi Masters,

I recently saw an instance of Logstash 7.7.1 that was accepting sFlow traffic as input. My colleague said he was able to set it up with no problem at all.

I would like to do this, but in a Logstash Docker container. So I downloaded the latest version (7.7.1) and then followed my colleague’s example. After logging into the container as root, I enabled the sFlow codec:

[root@e59b5de66f8a logstash]# /usr/share/logstash/bin/logstash-plugin install logstash-codec-sflow
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules to method sun.nio.ch.NativeThread.signal(long)
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Validating logstash-codec-sflow
Installing logstash-codec-sflow
Installation successful
[root@e59b5de66f8a logstash]#

After restarting the container, I added this to my config file:

input {
   udp {
      port => 6343
      codec => sflow {}
   }
}
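To see decoded events while testing, I pair that input with a simple stdout output (the rubydebug codec is just for readable event dumps):

```conf
output {
   stdout {
      codec => rubydebug
   }
}
```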

(sFlow agents export on UDP port 6343.) Next, I configured /usr/share/logstash/config/logstash.yml to load my config file. When I restarted Logstash again, I saw this in the log:

[2020-06-12T19:23:20,919][INFO ][logstash.inputs.udp ][main][d57837e25c354d7273a126694d7196159b291d4d1e80c122dea7dc9f71d6272b] UDP listener started {:address=>"0.0.0.0:6343", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}

So Logstash seems to be listening on UDP 6343. To confirm traffic was actually arriving, I installed tshark in the container:

[root@e59b5de66f8a logstash]# ifconfig
...etc...
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.10.100  netmask 255.255.255.0  broadcast 168.161.114.255
...etc...
[root@e59b5de66f8a logstash]#
[root@e59b5de66f8a logstash]#
[root@e59b5de66f8a logstash]# tshark -i eth1
Capturing on 'eth1'
  1 0.000000000 20.20.20.20 -> 10.10.10.100 sFlow 282 V5, agent 20.20.20.20, sub-agent ID 0, seq 839374, 1 samples
  2 1.004427124 20.20.20.20 -> 10.10.10.100 sFlow 386 V5, agent 20.20.20.20, sub-agent ID 0, seq 839375, 2 samples
  3 3.003827262 20.20.20.20 -> 10.10.10.100 sFlow 498 V5, agent 20.20.20.20, sub-agent ID 0, seq 839376, 2 samples
  4 4.005070922 20.20.20.20 -> 10.10.10.100 sFlow 502 V5, agent 20.20.20.20, sub-agent ID 0, seq 839377, 2 samples
^C4 packets captured
[root@e59b5de66f8a logstash]#  

So sFlow packets are arriving at my container, and yes, they are coming in on UDP 6343 (sFlow). The only trouble is, I never see sFlow data in Logstash's output, regardless of whether I direct it to stdout, a file, or Elasticsearch. Is there something I am missing? I hope so!
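To rule out a malformed feed before blaming the codec, I also decoded the fixed sFlow v5 datagram header by hand from a captured payload. This is a rough sketch (IPv4 agent addresses only); the field layout comes from the sFlow Version 5 specification: version, agent address type, agent address, sub-agent ID, sequence number, uptime, and sample count, all big-endian 32-bit fields:

```python
import struct
import ipaddress

def parse_sflow_v5_header(data: bytes):
    # First two fields: datagram version and agent address type (1 = IPv4).
    version, addr_type = struct.unpack_from("!II", data, 0)
    if addr_type != 1:
        raise ValueError("only IPv4 agent addresses handled in this sketch")
    # IPv4 agent address occupies bytes 8..12.
    agent = str(ipaddress.IPv4Address(data[8:12]))
    # Sub-agent ID, sequence number, uptime (ms), number of samples.
    sub_agent, seq, uptime, nsamples = struct.unpack_from("!IIII", data, 12)
    return version, agent, sub_agent, seq, nsamples

# Example: a fabricated header matching the tshark output above.
datagram = struct.pack("!II4sIIII", 5, 1,
                       ipaddress.IPv4Address("20.20.20.20").packed,
                       0, 839374, 123456, 1)
print(parse_sflow_v5_header(datagram))
# -> (5, '20.20.20.20', 0, 839374, 1)
```

The fabricated example decodes to version 5, agent 20.20.20.20, sequence 839374, one sample, which lines up with what tshark reported.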
