Hi,
Hope you can help me with the issue I'm having.
Current scenario:
We are integrating both NetFlow and sFlow into Elastic with Logstash 7.3.2 on a CentOS 7 server. We have managed to visualize NetFlow correctly, but when trying to ingest sFlow we are not seeing anything.
For testing purposes, we are running Logstash directly from the command line.
On port 6344 we are receiving sFlow from a NEXUS 3000 with the following configuration:
feature sflow
sflow sampling-rate 4096
sflow max-sampled-size 128 -- 64
sflow counter-poll-interval 1
sflow max-datagram-size 1400
sflow collector-ip X.X.X.X vrf default
sflow collector-port 6344
sflow agent-ip Y.Y.Y.Y
no sflow extended switch
...
sflow data-source interface ...
Command line:
[root@xxxxxxxx ~]# /usr/share/logstash/bin/logstash -e 'input { udp { port => 6344 }}' --debug
...
[DEBUG] 2019-10-17 18:06:53.529 [Api Webserver] agent - Trying to start WebServer {:port=>9600}
[INFO ] 2019-10-17 18:06:53.591 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:6344", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[DEBUG] 2019-10-17 18:06:53.614 [Api Webserver] service - [api-service] start
[INFO ] 2019-10-17 18:06:53.919 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[DEBUG] 2019-10-17 18:06:53.307 [[main]>worker1] CompiledPipeline - Compiled output
...
[DEBUG] 2019-10-17 18:06:55.577 [pool-3-thread-2] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 18:06:55.580 [pool-3-thread-2] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-17 18:06:58.261 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-17 18:07:00.596 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 18:07:00.597 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-17 18:07:03.261 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-17 18:07:05.607 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 18:07:05.608 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-17 18:07:08.261 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-17 18:07:10.621 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 18:07:10.622 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-17 18:07:13.261 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
...
(same thing on and on)
As you can see, NO data is coming through Logstash, and no error or warning is shown. However, if we capture packets, we can see sFlow arriving on port 6344:
[root@xxxxxxxx ~]# tcpdump -vvv -i any port 6344
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
13:05:30.791669 IP (tos 0x0, ttl 63, id 8334, offset 0, flags [none], proto UDP (17), length 1128)
10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1100
13:05:30.803948 IP (tos 0x0, ttl 63, id 8335, offset 0, flags [none], proto UDP (17), length 1376)
10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1348
13:05:30.815049 IP (tos 0x0, ttl 63, id 8336, offset 0, flags [none], proto UDP (17), length 1316)
10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1288
...
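To rule out the socket layer itself, here is a minimal sketch of the kind of check we could run (the `udp_roundtrip` helper is hypothetical, not part of our setup): bind a UDP socket the way Logstash's udp input would, send one datagram to it, and confirm it arrives. The bind/receive half alone, pointed at port 6344 without the self-send, would show whether the switch's packets actually reach an application socket or are dropped earlier, e.g. by firewalld/iptables, which tcpdump does not reflect since it captures before netfilter's INPUT processing.

```python
import socket

def udp_roundtrip(port: int = 0) -> bytes:
    """Bind a UDP socket, send a probe datagram to it, return what was read."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", port))     # port 0 = let the OS pick a free port
    addr = rx.getsockname()
    rx.settimeout(2.0)               # fail fast instead of blocking forever

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(b"sflow-probe", addr)  # replace with the real sender in a field test

    data, _ = rx.recvfrom(2048)
    rx.close()
    tx.close()
    return data

if __name__ == "__main__":
    print(udp_roundtrip())  # b'sflow-probe' if the local socket path works
```

If a plain socket like this receives nothing on 6344 while tcpdump still shows traffic, the drop is happening in the host firewall rather than in Logstash.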
Same test, but this time listening on port 6343, where we are receiving sFlow traffic from the hsflowd agent installed locally:
[root@xxxxxxxx ~]# /usr/share/logstash/bin/logstash -e 'input { udp { port => 6343 }}' --debug
...
[INFO ] 2019-10-17 18:10:36.509 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-10-17 18:10:36.562 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:6343"}
[DEBUG] 2019-10-17 18:10:36.662 [Api Webserver] agent - Starting puma
[INFO ] 2019-10-17 18:10:36.701 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:6343", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[DEBUG] 2019-10-17 18:10:36.725 [Api Webserver] agent - Trying to start WebServer {:port=>9600}
[DEBUG] 2019-10-17 18:10:36.831 [Api Webserver] service - [api-service] start
[INFO ] 2019-10-17 18:10:37.244 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[DEBUG] 2019-10-17 18:10:36.318 [[main]>worker1] CompiledPipeline - Compiled output
...
[DEBUG] 2019-10-17 18:10:38.826 [pool-3-thread-2] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 18:10:38.829 [pool-3-thread-2] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-17 18:10:41.376 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[WARN ] 2019-10-17 18:10:43.737 [<udp.1] plain - Received an event that has a different character encoding than you configured. {:text=>"\\u0000\\u0000\\u0000\\u0005\\u0000\\u0000\\u0000\\u0001\\n\\u0001P:\\u0000\\u0001\\x86\\xA0\\u0000\\u0000\\u00021\\u00013\\xAB.\\u0000\\u0000\\u0000\\u..
[DEBUG] 2019-10-17 18:10:43.858 [pool-3-thread-2] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 18:10:43.859 [pool-3-thread-2] jvm - collector name {:name=>"ConcurrentMarkSweep"}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
"@version" => "1",
"@timestamp" => 2019-10-17T16:10:43.761Z,
"host" => "10.1.85.21",
"message" => "\\u0000\\u0000\\u0000\\u0005\\u0000\\u0000\\u0000\\u0001\\n\\u0001P:\\u0000\\u0001\\x86\\xA0\\u0000\\u0000\\u00021\\u00013\\xAB.\\u0000\\u0000\\u0000\\u0...
}
[DEBUG] 2019-10-17 18:10:46.376 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
This time we are able to see packets received on port 6343:
[root@xxxxxxxx ~]# tcpdump -vvv -i any -s 0 port 6343
...
17:43:44.105817 IP (tos 0x0, ttl 64, id 38404, offset 0, flags [DF], proto UDP (17), length 772)
elastic-netflow.56817 > elastic-netflow.sflow: [bad udp cksum 0xc12d -> 0x0c08!] sFlowv5, IPv4 agent elastic-netflow, agent-id 100000, seqnum 516, uptime 18543802, samples 1, length 744
counter sample (2), length 708, seqnum 516, type 2, idx 1, records 10
enterprise 0, Unknown (2001) length 36
enterprise 0, Unknown (2010) length 28
...
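Separately from the missing Nexus packets: on 6343 the payload shows up as escaped binary because the default plain codec cannot decode sFlow's binary format. We wondered whether the input should use a dedicated codec; a sketch of what we think it could look like, assuming the community logstash-codec-sflow plugin is installed (we have not confirmed this on our setup):

```conf
input {
  udp {
    port  => 6344
    codec => sflow   # assumes logstash-codec-sflow is installed
  }
}
output {
  stdout { codec => rubydebug }
}
```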
Any ideas? Why are we unable to see the sFlow packets received from the NEXUS 3000 device?
Thanks in advance!!!