Logstash in Docker + syslog + preserving source IPs

Hi,

I'm running into a wall when it comes to receiving syslog data from various sources on my network. I think this might be an issue with Docker, but I wanted to get a feel for what others are doing in production before I start tearing my hair out. :slight_smile: Googling around has turned up mixed results - the closest I got was something about client IPs not being preserved in Swarm mode (which I'm not using), and a three-year-old Google Groups post describing the exact same behavior I'm currently seeing, but with no resolution.

What I'm seeing is that whether I use the syslog input plugin or the generic tcp/udp input plugins with the syslog_pri filter, some of my source IPs are not preserved (but some are...).

When I say "not preserved", I mean that the source IP showing in my "host" field is that of the Docker bridge interface (in my case 172.x.x.1). tcpdump on the network interface does show that the traffic is getting to it with the original source IP, which is leading me to believe that Docker is mucking with it somehow.

Here's what I've tried so far:

  1. Set up a syslog/tcp/udp input on port 5014, then point my devices at the container host on port 5014. This appears to yield accurate results, but I can't configure all of my equipment to send syslog to a port other than UDP/514 (some kit just doesn't allow for it), so this isn't really an option.

  2. Set up the syslog/tcp/udp input on port 5014 and run the container with -p 514:5014 -p 514:5014/udp (rough invocations for this and attempt 4 are sketched after the list). This yields some host IPs being set to the Docker bridge IP, but not all (wtf m8).

  3. Set up the syslog/tcp/udp input on port 514 and run the container with -p 514:514 -p 514:514/udp --user=root. This ALSO yields some IPs being hidden behind the Docker bridge IP, but not all (another wtf).

  4. Set up the syslog/tcp/udp input on port 514 and just run the container with --user=root --net=host. This appears to have the desired effect, but it seems to fly in the face of what the container maintainers had in mind by not running Logstash as root - and it has the added downside of opening those ports on ALL interfaces of my host.
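
For reference, the invocations for attempts 2 and 4 looked roughly like this (the pipeline bind-mount path is just an example of how the config gets into the container):

```
# attempt 2: map host port 514 to the container's 5014 listener
docker run -d \
  -p 514:5014 -p 514:5014/udp \
  -v /srv/logstash/pipeline:/usr/share/logstash/pipeline \
  docker.elastic.co/logstash/logstash-oss:6.0.0

# attempt 4: share the host's network namespace, so no NAT/proxying
# happens; root is needed to bind port 514 directly on the host
docker run -d \
  --user=root --net=host \
  -v /srv/logstash/pipeline:/usr/share/logstash/pipeline \
  docker.elastic.co/logstash/logstash-oss:6.0.0
```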

Has anyone else seen this behavior? Am I missing something obvious? Should I ditch the idea of running Logstash in a Docker container and just run it as a package on the host itself?

System info as follows:

docker.elastic.co/logstash/logstash-oss:6.0.0 container image
Docker Server Version: 17.09.1-ce
Debian GNU/Linux 9.3 (stretch) amd64 on bare metal

Thanks.

After a ton more time spent digging around the Internet, it would seem that not preserving source IPs for UDP (and maybe TCP as well) traffic from outside the container host is a known issue. Looks like I won't be running Logstash in a container after all. :slight_smile:

Let's hope a solid fix is figured out.

For posterity's sake, and because I'm an idiot, I didn't realize at the time that I could still control which interfaces the ports are opened on at the application level while using --net=host, simply by telling the input plugin to listen on something other than 0.0.0.0.
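
In other words, something along these lines keeps --net=host but only binds the syslog listeners to a single interface (the address is just a placeholder for the interface that should accept syslog):

```
input {
  udp {
    # bind to one address instead of 0.0.0.0 so the listener stays
    # off the host's other interfaces even with --net=host
    host => "192.0.2.10"
    port => 514
    type => "syslog"
  }
  tcp {
    host => "192.0.2.10"
    port => 514
    type => "syslog"
  }
}
```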

Kudos goes to @fcrisciani for showing me the error of my ways.
