sgreszcz (Stephen Greszczyszyn)
August 6, 2019, 3:01pm
The problem was that Logstash was on the Docker bridge network by default. I had to move it to the host network so that the netflow source IP was not overwritten.
```
services:
  logstash:
    network_mode: host
    image: "docker.elastic.co/logstash/logstash:{{ … }}"
    hostname: "{{ansible_hostname}}"
    container_name: logstash-netflow
```
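To double-check the fix, a quick packet capture on the host interface shows which source address actually reaches the Logstash process (a minimal sketch; the interface name and netflow port are assumptions, so adjust them to your exporter configuration):

```
# Hypothetical interface (eth0) and netflow port (2055) -- adjust to your setup.
# With network_mode: host there is no NAT in front of Logstash, so the source
# address captured here should match the "host" field in the Logstash event.
sudo tcpdump -ni eth0 udp port 2055 -c 5
```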
There is a big thread on this:
Hi,
I'm running into a wall when it comes to receiving syslog data from various sources in my network. I think this might be an issue with Docker, but I wanted to get a feel for what others are doing in production before I start to tear my hair out. Googling around has given me varying results of success - the closest I got was something about client IP's not being preserved in Swarm mode (which I'm not using), and a three year old google group post that talked about the exact sa…
(GitHub issue, opened 13 Dec 2017, closed 13 Dec 2017)
* [x] This is a bug report
* [ ] This is a feature request
* [x] I searched existing issues before opening this one
(I actually don't know if this is a bug/intended behavior/an error on my part. I also don't know if this is a Docker problem, or a Logstash problem, so I've cross-posted details here: https://discuss.elastic.co/t/logstash-in-docker-syslog-preserving-source-ips/111492)
### Expected behavior
I'm using `docker.elastic.co/logstash/logstash-oss:6.0.0` on a server that's mainly going to be used for receiving syslog and snmptrap data from primarily networking devices and some other servers.
Mapping a privileged port (514) on my container host to a non-privileged port (5014) on the container should preserve the source IPs of the devices that are shipping log lines to the container host.
### Actual behavior
1. Set up a syslog/tcp/udp input on port 5014, then point my devices at the container host on port 5014 (`-p 5014:5014 -p 5014:5014/udp`). This appears to yield accurate results.
2. Set the syslog/tcp/udp input to port 5014 and run the container with `-p 514:5014 -p 514:5014/udp`. This yields some host IPs being set to the Docker bridge IP, but not all.
3. Set the syslog/tcp/udp input to port 514 and run the container with `-p 514:514 -p 514:514/udp --user=root`. This ALSO yields some IPs being hidden behind the Docker bridge IP, but not all.
4. Set the syslog/tcp/udp input to port 514 and just run the container with `--user=root --net=host`. This appears to have the desired effect, but has the downside of opening said ports on ALL interfaces of my container host. (Yes, I realize I can use `iptables` to mitigate this; a sketch follows below.)
*NB:* It doesn't matter if the `syslog` logstash input plugin is used, or just the `udp` input plugin.
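For point 4, here is a minimal `iptables` sketch of the mitigation mentioned there (not from the issue itself; the interface names are assumptions): accept syslog on the internal interface only and drop it everywhere else.

```
# Hypothetical interface names: eth0 = internal, eth1 = Internet-facing.
# With --net=host the syslog traffic traverses the host's INPUT chain,
# so ordinary host firewall rules apply.
iptables -A INPUT -i eth0 -p udp --dport 514 -j ACCEPT
iptables -A INPUT -p udp --dport 514 -j DROP
```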
### Steps to reproduce the behavior
See actual behavior. I'll attempt to translate this into docker one-liners, but it might be tough if you don't have a handful of devices/systems outside of the container host to send log messages from. I do have a reliable offending device, though. My Docker container host is on 10.15.2.11 and this particular switch has an IP of 10.3.3.31.
For this scenario I changed my device config to send syslog to 10.15.2.11 on UDP/5014.
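If you don't have spare devices handy, a single fake syslog message sent from any other machine exercises the same path (a sketch; the priority value and message text are arbitrary):

```
# Run from a machine other than the container host: send one RFC3164-style
# message over UDP to the published port.
echo '<13>Dec 13 09:00:00 testhost test: hello from a fake device' | nc -u -w1 10.15.2.11 5014
```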
```
mkdir pipeline
echo 'input { udp { port => 5014 host => "0.0.0.0" } } filter { syslog_pri { } } output { stdout { codec => rubydebug } }' > pipeline/logstash.conf
docker run -it --rm -p 5014:5014/udp -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash-oss:6.0.0
{
"@timestamp" => 2017-12-13T08:37:48.241Z,
"syslog_severity_code" => 5,
"syslog_facility" => "user-level",
"@version" => "1",
"host" => "10.3.3.31", <---- correct IP
"syslog_facility_code" => 1,
"message" => "SYSLOG_MESSAGE_REDACTED",
"syslog_severity" => "notice"
}
```
Number 2. The `logstash.conf` file created in step 1 does not change. I change my device back to using the log host on the standard UDP/514.
```
docker run -it --rm -p 514:5014/udp -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash-oss:6.0.0
{
"@timestamp" => 2017-12-13T08:45:56.166Z,
"syslog_severity_code" => 5,
"syslog_facility" => "user-level",
"@version" => "1",
"host" => "172.17.0.1", <---- WRONG IP
"syslog_facility_code" => 1,
"message" => "SYSLOG_MESSAGE_REDACTED",
"syslog_severity" => "notice"
}
```
Number 3.
```
echo 'input { udp { port => 514 host => "0.0.0.0" } } filter { syslog_pri { } } output { stdout { codec => rubydebug } }' > pipeline/logstash.conf
docker run -it --rm -p 514:514/udp -v ~/pipeline/:/usr/share/logstash/pipeline/ --user=root docker.elastic.co/logstash/logstash-oss:6.0.0
{
"@timestamp" => 2017-12-13T08:50:11.071Z,
"syslog_severity_code" => 5,
"syslog_facility" => "user-level",
"@version" => "1",
"host" => "172.17.0.1", <---- WRONG IP
"syslog_facility_code" => 1,
"message" => "SYSLOG_MESSAGE_REDACTED",
"syslog_severity" => "notice"
}
```
And last but not least:
```
docker run -it --rm --net=host -v ~/pipeline/:/usr/share/logstash/pipeline/ --user=root docker.elastic.co/logstash/logstash-oss:6.0.0
{
"@timestamp" => 2017-12-13T08:52:11.321Z,
"syslog_severity_code" => 5,
"syslog_facility" => "user-level",
"@version" => "1",
"host" => "10.3.3.31", <---- correct IP
"syslog_facility_code" => 1,
"message" => "SYSLOG_MESSAGE_REDACTED",
"syslog_severity" => "notice"
}
```
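Two diagnostics that are often useful for this symptom (a hedged aside, not from the issue; the second assumes `conntrack-tools` is installed): check whether `docker-proxy` is holding the published UDP port, and whether stale UDP conntrack entries exist for it. Both have been known to make the bridge gateway address show up as the apparent source.

```
# Is the userland proxy (docker-proxy) bound to the published UDP port?
sudo ss -ulpn | grep ':514 '
# Any lingering UDP conntrack entries for that port? (They can be flushed
# with `conntrack -D -p udp --dport 514` if so.)
sudo conntrack -L -p udp --dport 514
```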
**Output of `docker version`:**
```
Client:
Version: 17.09.1-ce
API version: 1.32
Go version: go1.8.3
Git commit: 19e2cf6
Built: Thu Dec 7 22:24:16 2017
OS/Arch: linux/amd64
Server:
Version: 17.09.1-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: 19e2cf6
Built: Thu Dec 7 22:22:56 2017
OS/Arch: linux/amd64
Experimental: false
```
**Output of `docker info`:**
```
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 37
Server Version: 17.09.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.0-4-amd64
Operating System: Debian GNU/Linux 9 (stretch)
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 31.38GiB
Name: va1-netops-bastion-01
ID: 7LKX:2PO5:N2ME:VLHH:Z6KQ:WBTD:GCRE:MN7M:YIM2:UPHC:3M4K:ABFR
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
```
**Additional environment details (AWS, VirtualBox, physical, etc.)**
I'm running Debian 9.3 amd64 on a baremetal server.
I have no special networking setup, aside from a couple extra IP'ed interfaces on the server, one of which is Internet-facing, which is why using `--net=host` is undesirable. `tcpdump` on the container host's interface confirms that the messages are coming in with the proper source IP on the wire.
A related issue (opened 20 Oct 2017, closed 14 Dec 2017):
Dear colleagues,
I'm currently dockerizing a traffic analyzer application which I developed in Python and which is already in production. I'm almost there; the first step I took was to create the image of my app based on Alpine 3.6. The main dependency for my code to run is the nfdump project (available at https://github.com/phaag/nfdump). This project delivers a collector for Netflow/IPFIX UDP packets called "nfcapd", which runs on port 9995 (properly exposed in my Dockerfile).
The thing is, if I run the container on the default network and publish port 9995, the UDP packets coming from different routers (with different IPs, obviously) all arrive with the same Docker gateway IP (172.18.0.1) -- but I need the original source IP address in order to determine which router the packets belong to.
So I searched for solutions and stumbled on the --net=host option. If I run the container with --net=host, the source IP of the incoming UDP packets is not altered, apparently solving my problem. But then again, this container is not running alone in the service. In the docker-compose.yml I've created some other services, such as db (InfluxDB), which has to be reachable by the collector container (it collects the packets, processes them, and sends the results to InfluxDB).
Wrapping it up: if I use the host option for the collector container's network, it receives the correct source IP addresses, but on the other hand, it can't communicate with the other containers that run in the stack. The DNS does not work and I can't resolve the db container name from within the collector container. For obvious reasons I can't rely on IPs, as taught by you, professor, since I expect to run multiple instances of this service to serve different clients from a single machine (swarm is not a requirement in this project).
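One common workaround for that last trade-off (a sketch under assumptions, not something from the thread): keep the collector on the host network and let it reach the rest of the stack through ports published on the loopback interface, since a host-networked container cannot use the compose-internal DNS names. The image name and `INFLUX_URL` variable below are hypothetical.

```
# The rest of the stack stays on an ordinary bridge network and publishes
# its port on loopback only.
docker network create metrics
docker run -d --name influxdb --network metrics -p 127.0.0.1:8086:8086 influxdb:1.8
# The collector shares the host's network stack and talks to InfluxDB via
# the loopback-published port (INFLUX_URL and my-collector-image are hypothetical).
docker run -d --name collector --network host \
  -e INFLUX_URL=http://127.0.0.1:8086 my-collector-image
```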