Filebeat as a UDP Syslog Listener Dropping a Lot of Logs

Scott,

IMHO this is not a queuing problem, it is a throughput problem.

Adding a disk queue is not going to help; that is for network interruptions / store and forward. You might get a momentary pop, but that won't solve the main throughput issue. Here are the docs anyway.
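
For reference, the disk queue is just a setting in filebeat.yml; a minimal sketch with placeholder values is below. It buffers events when the output is slow or unreachable, it does not make the UDP input read any faster.

queue.disk:
  max_size: 10GB                       # placeholder cap on the on-disk queue
  # path: /var/lib/filebeat/diskqueue  # optional; defaults under Filebeat's data path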

tl;dr I don't think you are going to get where you want to go with a single Filebeat over UDP; if you only have a single UDP endpoint, I would recommend Logstash.

I did some testing with UDP and TCP and Filebeat, and I think there is a throughput ceiling I am running into that is less than the 8-10 MB/s you will need (3K messages/sec at ~3 KB per message), so I am not sure you are going to get there.

Yup, I think you are right: Filebeat is limited on the ingest side (and perhaps processing) ... and after some research I think I have a better understanding of why.

I have other Filebeats shipping ~5K EPS, but those messages are about 400-500 bytes, so that ends up back around that same 2-2.5 MB/s.

The way I scale higher is running multiple Beats in parallel; how to do that with UDP I am not clear on, as I am not a network guy.

Filebeat historically was meant to be a lightweight shipper distributed across many edge devices, not a "fat pipe" shipper; now it is becoming a cloud / network endpoint collector (firewalls etc.). I have been told there have been discussions around scaling the input (+ the entire processing chain) at some point, but today scaling is via multiple Beats.

I think you may need to horizontally scale Filebeat (more Beats, though again I am not sure how with UDP) ... or use Logstash, which is probably the more scalable pipe today.

A properly sized Logstash with a UDP input and the correct config (multiple workers, which in Logstash means the full pipeline) on through to the output would probably work much better.
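
A minimal sketch of such a pipeline, assuming a plain UDP syslog input straight to Elasticsearch (the port, worker count, buffer sizes, and the Elasticsearch host are placeholders; pipeline.workers in logstash.yml is the setting that sizes the full filter/output pipeline mentioned above):

input {
  udp {
    port => 514                       # the port the firewalls send syslog to
    workers => 4                      # threads reading off the UDP socket
    queue_size => 16384               # in-memory packet queue for this input
    receive_buffer_bytes => 16777216  # request a 16 MB OS socket receive buffer
  }
}

output {
  elasticsearch {
    hosts => ["https://es01:9200"]    # placeholder cluster address
  }
}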

Logstash has a long history of forwarding / processing syslog, firewall logs, etc.

Good Hunting...let us know where you end up.


So -- I scaled them horizontally.

I installed 5 different filebeats on different UDP ports and it's working now.
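
In case it helps anyone else, each instance is just its own filebeat.yml with a UDP input bound to its own port, roughly like this (the port, buffer size, and output host are placeholders):

# filebeat.yml for one of the parallel instances
filebeat.inputs:
  - type: udp
    host: "0.0.0.0:51401"    # each instance listens on its own UDP port
    max_message_size: 10KiB  # leaves headroom for the ~3 KB firewall messages
    read_buffer: 10MiB       # bigger socket receive buffer to absorb bursts (capped by OS limits)

output.elasticsearch:
  hosts: ["https://es01:9200"]  # placeholder cluster address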

I think I still have an issue though with the biggest firewall -- I have it sending to its own dedicated Filebeat, but I think it's still hitting a cap of sorts.

Any guesses on how to increase throughput for that one?

Here's what it looks like, btw:

Nice Progress....

But it seems we are back to the original issue 🙂 max UDP throughput for a single Filebeat, now with the big firewall.

With UDP + Filebeat I do not know how to solve this / vertically scale a single Filebeat.

With TCP you could load balance to multiple Filebeats.

With UDP I will go back to Logstash, which should be able to handle that volume no problem.

You can load balance UDP to multiple Filebeat instances. Just use NGINX and add a config similar to the following...

stream {
  upstream syslog {
    server 127.0.0.1:51401; # Filebeat 1
    server 127.0.0.1:51402; # Filebeat 2
    server 127.0.0.1:51403; # Filebeat 3
    server 127.0.0.1:51404; # Filebeat 4
  }

  server {
    listen 514 udp;                      # accept syslog datagrams on the standard port
    proxy_pass syslog;                   # round-robin across the Filebeat instances
    proxy_buffer_size 65536k;            # per-session read buffer (64 MB)
    proxy_timeout 1s;
    proxy_responses 0;                   # syslog over UDP is one-way, expect no replies
    proxy_bind $remote_addr transparent; # preserve the original sender IP so Filebeat
                                         # sees the firewall address (needs elevated
                                         # privileges on the nginx workers)
  }
}


Thanks @rcowart, gonna put that in my tool chest. As I mentioned above, I'm not a networking maven, so it's always good to learn something new.
