Auditbeat on docker fails to run auditd module

Hi!

I'm setting up Auditbeat to run on an Amazon Linux EC2 instance.
When I run the default install and config for Auditbeat, everything works fine: the auditd module runs and my rules are applied as expected.

BUT: when I use the same auditbeat.yml config in my Docker setup, I get the following error:

2021-09-16T08:06:51.167Z ERROR [auditd] auditd/audit_linux.go:171 Failure adding audit rules {"error": "Skipping rule configuration: Audit rules are locked", "errorVerbose": "Skipping rule configuration: Audit rules are locked\ngithub.com/elastic/beats/v7/auditbeat/module/auditd.(*MetricSet).addRules\n\t/go/src/github.com/elastic/beats/auditbeat/module/auditd/audit_linux.go:271\ngithub.com/elastic/beats/v7/auditbeat/module/auditd.(*MetricSet).Run\n\t/go/src/github.com/elastic/beats/auditbeat/module/auditd/audit_linux.go:169\ngithub.com/elastic/beats/v7/metricbeat/mb/module.(*metricSetWrapper).run\n\t/go/src/github.com/elastic/beats/metricbeat/mb/module/wrapper.go:203\ngithub.com/elastic/beats/v7/metricbeat/mb/module.(*Wrapper).Start.func1\n\t/go/src/github.com/elastic/beats/metricbeat/mb/module/wrapper.go:147\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371"}
[root@ip-10-243-224-144 ec2-user]#

My auditbeat.yml config is:


# =========================== Modules configuration ============================
auditbeat.modules:

- module: auditd
  #
  resolve_ids: true
  failure_mode: silent
  backlog_limit: 8196
  rate_limit: 0
  include_raw_message: false
  include_warnings: false
  backpressure_strategy: auto
  #
  #Load audit rules from separate files. Same format as audit.rules(7).
  processors:
  # Add additional processors here
    - add_fields:
        target: tags
        fields:
          type: 'module: auditd'
          #processors:
    # The below only works if systemid is already stored as an env variable
    - add_fields:
        target: tags
        fields:
          systemid: ${systemid}

  audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
  #audit_rules: |

- module: system
  datasets:
    - host
    - login
    - user
  period: 1m
  state.period: 24h
  user.detect_password_changes: true

- module: system
  datasets:
    - process
    - socket
  period: 1m
# ================================= Processors =================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_process_metadata:
    #  match_pids: [system.process.ppid]
    # target: system.process.parent

My docker-compose config file is:

version: "3.8"

# https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-overview.html
# Does not look like Auditd is supported in Alpine linux: https://github.com/linuxkit/linuxkit/issues/52

services:

  auditbeat:
    user: root
    pid: host
    cap_add:
      - AUDIT_CONTROL
      - AUDIT_READ
      - CAP_SYS_ADMIN # This is an insecure option for getting access to /sys and bind mount tracefs debugfs
      - CAP_NET_ADMIN #This is needed in order for auditbeat to monitor using socket
    container_name: auditbeat
    hostname: auditbeat
    restart: always
    image: docker.elastic.co/beats/auditbeat:${ELASTIC_VERSION:-7.14.1}
    volumes:
      - /var/log:/var/log:ro
      # Allows us to report on Docker using the host's information.
      - /var/run/docker.sock:/var/run/docker.sock
      - ./auditbeat_v5_docker_elk.yml:/usr/share/auditbeat/auditbeat.yml:ro
      - ./klarna_auditd_conf.yaml:/usr/share/auditbeat/audit.rules.d/klarna_auditd_conf.yaml:ro
      #- /Volumes/GoogleDrive/My Drive/Tools/Humio/Auditbeat/Lab_setup/docker/elk_version/auditbeat_v5_docker_elk.yml:/usr/share/auditbeat/auditbeat.yml:ro
      #- /Volumes/GoogleDrive/My Drive/Tools/Humio/Auditbeat/Lab_setup/auditd/klarna_auditd_conf.yaml:/usr/share/auditbeat/audit.rules.d/klarna_auditd_conf.yaml:ro
    #environment:
      # - ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST:-node1}
      # - KIBANA_HOST=${KIBANA_HOST:-node1}
      # - ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME:-elastic}
      # - ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD:-changeme}
    command: auditbeat -e -strict.perms=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    stdin_open: true
    tty: true
    #network_mode: bridge
    networks:
      - auditbeat
    deploy:
      mode: global
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "50"
networks:
  auditbeat:
    external: true

Do you have Auditbeat on the host so you can run auditbeat show auditd-status and share the output? Or do you have auditctl available on the host to run auditctl -s?

Hi Andrew!

I run the setup on two separate instances, but with the same base config and AMI.
So the only difference is that one instance runs Auditbeat in Docker (where auditd isn't working) and the other runs Auditbeat via systemd.

Here is the output for "auditctl -s" for the instance running Auditbeat using Docker

enabled 1
failure 1
pid 3323
rate_limit 0
backlog_limit 8192
lost 0
backlog 0
backlog_wait_time 15000
loginuid_immutable 0 unlocked

Here is the output for "auditbeat show auditd-status" from the instance running Auditbeat as systemd:

enabled 2
failure 1
pid 3321
rate_limit 0
backlog_limit 8192
lost 0
backlog 0
backlog_wait_time 15000
features 0x3f

When I tried the following config instead:

audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
audit_rules: |

Whereas before I had:

audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
  #audit_rules: |

I get a different error, but it also seems to indicate that Auditbeat in Docker can't coexist with the auditd systemd service:
"message":"failed to set audit PID. An audit process is already running (PID 3323)"

There might be a couple of issues at play, based on the different errors you've posted.

The enabled 2 means that the audit rules and configuration have been put into immutable/locked mode. You probably have -e 2 in your auditd rules. This prevents Auditbeat from installing the rules defined in its config; that is when you see Audit rules are locked.
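For reference, locked mode usually comes from the last line of a rules file on the host. A minimal sketch of such a file (assuming a typical /etc/audit/rules.d/audit.rules) looks like this:

# delete existing rules and set the backlog
-D
-b 8192
# example watch rule
-w /etc/passwd -p wa -k passwd
# make the configuration immutable; only a reboot can undo this
-e 2

Changing that last line to -e 1 (and rebooting, since immutable mode cannot be cleared at runtime) would allow Auditbeat to install its own rules again.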

The failed to set audit PID error is a result of using both auditd and Auditbeat. You can use them both if you put Auditbeat into multicast mode and do not define any rules in its config (since auditd will already be managing the rules). See socket_type in the docs for a more detailed explanation.

- module: auditd
  socket_type: multicast

Hi Andrew.

Many thanks for helping me with this.
I tried your approach, and I believe Auditbeat will run in multicast mode for me given the kernel version.
Using the above config for the Auditbeat auditd module, I get the following output in the Docker logs for the container running Auditbeat:

[root@ip-10-243-224-144 ec2-user]# docker logs f328aead9284 | grep auditd
2021-09-21T07:15:50.729Z	INFO	[auditd]	auditd/audit_linux.go:107	auditd module is running as euid=0 on kernel=4.14.243-185.433.amzn2.x86_64
2021-09-21T07:15:50.729Z	INFO	[auditd]	auditd/audit_linux.go:134	socket_type=multicast will be used.
2021-09-21T07:15:55.430Z	INFO	[auditd]	auditd/audit_linux.go:252	No audit_rules were specified.

So the errors are gone. However, I don't get any auditd logs from Auditbeat at all, despite seeing multiple log entries in the audit.log file on the instance itself.
Did I miss a step?

To elaborate, when I run
auditbeat show auditd-rules
Inside the docker container running auditbeat, I see the auditd rules that I added to the container using a volume mount.

When I run:
auditctl -l
On the host, I see a different ruleset, which is the one stored in
/etc/audit/rules.d/audit.rules on the host and used by the auditd systemd process on the host itself.

When I create a new auditd rule on the host I can see that it's accessible from inside the container too:

[root@ip-10-243-224-144 elk_version_2]# auditctl -w /etc/passwd -p wra -k passwd
[root@ip-10-243-224-144 elk_version_2]# auditctl -l | grep passwd
-w /etc/passwd -p rwa -k passwd
[root@ip-10-243-224-144 elk_version_2]# docker exec -it f328aead9284 /bin/bash
[root@auditbeat auditbeat]# auditbeat show auditd-rules | grep passwd
-w /etc/passwd -p rwa -k passwd

However, when I run:
cat /etc/passwd
both on the host and inside the container, I get no auditd event from Auditbeat, but the following entries appear in /var/log/audit/audit.log on the host:

[root@ip-10-243-224-144 elk_version_2]# cat /var/log/audit/audit.log | grep passwd
type=EXECVE msg=audit(1632211841.440:7988): argc=2 a0="cat" a1="/etc/passwd"
type=EXECVE msg=audit(1632211915.677:7991): argc=3 a0="grep" a1="--color=auto" a2="passwd"
type=EXECVE msg=audit(1632211966.901:7997): argc=3 a0="grep" a1="--color=auto" a2="passwd"
type=EXECVE msg=audit(1632211977.557:7999): argc=7 a0="auditctl" a1="-w" a2="/etc/passwd" a3="-p" a4="wra" a5="-k" a6="passwd"
type=CONFIG_CHANGE msg=audit(1632211977.557:8000): auid=1000 ses=38 op=add_rule key="passwd" list=4 res=1
type=EXECVE msg=audit(1632211980.265:8002): argc=3 a0="grep" a1="--color=auto" a2="passwd"
type=EXECVE msg=audit(1632211987.149:8005): argc=3 a0="grep" a1="--color=auto" a2="passwd"
type=SYSCALL msg=audit(1632211997.517:8008): arch=c000003e syscall=2 success=yes exit=3 a0=7f8efe8e62f9 a1=80000 a2=1b6 a3=80000 items=1 ppid=25745 pid=26583 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=38 comm="docker" exe="/usr/bin/docker" key="passwd"
type=EXECVE msg=audit(1632212000.725:8022): argc=3 a0="grep" a1="--color=auto" a2="passwd"
type=EXECVE msg=audit(1632212130.249:8026): argc=2 a0="cat" a1="/etc/passwd"
type=EXECVE msg=audit(1632212182.717:8032): argc=3 a0="grep" a1="--color=auto" a2="passwd"
type=EXECVE msg=audit(1632212342.506:8035): argc=2 a0="cat" a1="/etc/passwd"
type=SYSCALL msg=audit(1632212342.506:8036): arch=c000003e syscall=257 success=yes exit=3 a0=ffffffffffffff9c a1=7ffe5344975d a2=0 a3=0 items=1 ppid=25745 pid=26914 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=38 comm="cat" exe="/usr/bin/cat" key="passwd"
type=EXECVE msg=audit(1632212360.602:8039): argc=2 a0="cat" a1="/etc/passwd"
type=SYSCALL msg=audit(1632212360.602:8040): arch=c000003e syscall=257 success=yes exit=3 a0=ffffffffffffff9c a1=7ffe1412975d a2=0 a3=0 items=1 ppid=25745 pid=26931 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=38 comm="cat" exe="/usr/bin/cat" key="passwd"
type=EXECVE msg=audit(1632212369.666:8043): argc=3 a0="grep" a1="--color=auto" a2="passwd"

auditd status seen both from the host and container:

Inside container

[root@auditbeat auditbeat]# auditbeat show auditd-status
enabled 1
failure 0
pid 22038
rate_limit 0
backlog_limit 8196
lost 0
backlog 0
backlog_wait_time 0
features 0x3f

On the host

[root@ip-10-243-224-144 elk_version_2]# auditctl -s
enabled 1
failure 0
pid 22038
rate_limit 0
backlog_limit 8196
lost 0
backlog 0
backlog_wait_time 0
loginuid_immutable 0 unlocked

It sounds like you have it set up properly, given that it says it is using socket_type=multicast. Can you run Auditbeat with debug logging enabled and share the log?

logging.level: debug
logging.selectors: [auditd, processors]

Hi Andrew

This is the result when I pull the logs from Docker using the container ID:

[root@ip-10-243-224-144 elk_version_2]# docker logs 7f6c494a9540 | grep auditd
2021-09-22T07:59:57.250Z	INFO	[auditd]	auditd/audit_linux.go:107	auditd module is running as euid=0 on kernel=4.14.243-185.433.amzn2.x86_64
2021-09-22T07:59:57.250Z	INFO	[auditd]	auditd/audit_linux.go:134	socket_type=multicast will be used.
2021-09-22T08:00:01.994Z	INFO	[auditd]	auditd/audit_linux.go:252	No audit_rules were specified.
2021-09-22T08:08:31.358Z	DEBUG	[auditd]	auditd/audit_linux.go:472	receiveEvents goroutine exited
2021-09-22T08:09:07.779Z	INFO	[auditd]	auditd/audit_linux.go:107	auditd module is running as euid=0 on kernel=4.14.243-185.433.amzn

When I do the same but filtering on DEBUG I get:

2021-09-22T08:18:11.796Z	DEBUG	[processors]	processing/processors.go:203	Publish event: {
2021-09-22T08:18:11.796Z	DEBUG	[processors]	processing/processors.go:128	Fail to apply processor global{add_host_metadata=[netinfo.enabled=[true], cache.ttl=[5m0s]], add_cloud_metadata={cloud:{account:{id:ACCOUNT_ID},availability_zone:eu-west-1a,image:{id:AMI},instance:{INSTANCE_ID},machine:{type:t2.medium},provider:aws,region:eu-west-1,service:{name:EC2}}}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]], add_process_metadata=[match_pids=[process.pid process.ppid process.parent.pid process.parent.ppid], mappings={container.id:container.id,process.args:process.args,process.executable:process.executable,process.name:process.name,process.pid:process.pid,process.ppid:process.ppid,process.start_time:process.start_time,process.title:process.title}, ignore_missing=true, overwrite_fields=false, restricted_fields=false, host_path=/, cgroup_prefixes=[/kubepods /docker]]}: process not found
2021-09-22T08:18:11.796Z	DEBUG	[processors]	processing/processors.go:203	Publish event: {
2021-09-22T08:18:13.796Z	DEBUG	[processors]	processing/processors.go:128	Fail to apply processor global{add_host_metadata=[netinfo.enabled=[true], cache.ttl=[5m0s]], add_cloud_metadata={cloud:{account:{id:ACCOUNT_ID},availability_zone:eu-west-1a,image:{id:AMI},instance:{INSTANCE_ID},machine:{type:t2.medium},provider:aws,region:eu-west-1,service:{name:EC2}}}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]], add_process_metadata=[match_pids=[process.pid process.ppid process.parent.pid process.parent.ppid], mappings={container.id:container.id,process.args:process.args,process.executable:process.executable,process.name:process.name,process.pid:process.pid,process.ppid:process.ppid,process.start_time:process.start_time,process.title:process.title}, ignore_missing=true, overwrite_fields=false, restricted_fields=false, host_path=/, cgroup_prefixes=[/kubepods /docker]]}: process not found
2021-09-22T08:18:13.796Z	DEBUG	[processors]	processing/processors.go:203	Publish event: {
2021-09-22T08:18:14.796Z	DEBUG	[processors]	processing/processors.go:128	Fail to apply processor global{add_host_metadata=[netinfo.enabled=[true], cache.ttl=[5m0s]], add_cloud_metadata={cloud:{account:{id:ACCOUNT_ID},availability_zone:eu-west-1a,image:{id:AMI},instance:{INSTANCE_ID},machine:{type:t2.medium},provider:aws,region:eu-west-1,service:{name:EC2}}}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]], add_process_metadata=[match_pids=[process.pid process.ppid process.parent.pid process.parent.ppid], mappings={container.id:container.id,process.args:process.args,process.executable:process.executable,process.name:process.name,process.pid:process.pid,process.ppid:process.ppid,process.start_time:process.start_time,process.title:process.title}, ignore_missing=true, overwrite_fields=false, restricted_fields=false, host_path=/, cgroup_prefixes=[/kubepods /docker]]}: process not found
2021-09-22T08:18:14.796Z	DEBUG	[processors]	processing/processors.go:203	Publish event: {
2021-09-22T08:18:14.796Z	DEBUG	[processors]	processing/processors.go:128	Fail to apply processor global{add_host_metadata=[netinfo.enabled=[true], cache.ttl=[5m0s]], add_cloud_metadata={cloud:{account:{id:ACCOUNT_ID},availability_zone:eu-west-1a,image:{id:AMI},instance:{INSTANCE_ID},machine:{type:t2.medium},provider:aws,region:eu-west-1,service:{name:EC2}}}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]], add_process_metadata=[match_pids=[process.pid process.ppid process.parent.pid process.parent.ppid], mappings={container.id:container.id,process.args:process.args,process.executable:process.executable,process.name:process.name,process.pid:process.pid,process.ppid:process.ppid,process.start_time:process.start_time,process.title:process.title}, ignore_missing=true, overwrite_fields=false, restricted_fields=false, host_path=/, cgroup_prefixes=[/kubepods /docker]]}: process not found

I still see no auditd logs, but also no other errors.

Furthermore, when I go into the Docker container running Auditbeat and check its audit log via:
[root@auditbeat auditbeat]# cat /var/log/audit/audit.log | head

I get:

type=DAEMON_START msg=audit(1632229262.538:9983): op=start ver=2.8.1 format=raw kernel=4.14.243-185.433.amzn2.x86_64 auid=4294967295 pid=32143 uid=0 ses=4294967295 res=success
type=SERVICE_START msg=audit(1632229261.537:9959): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
type=SERVICE_STOP msg=audit(1632229261.537:9960): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
type=CONFIG_CHANGE msg=audit(1632229262.537:9961): audit_enabled=1 old=1 auid=4294967295 ses=4294967295 res=1
type=CONFIG_CHANGE msg=audit(1632229262.537:9962): audit_pid=32143 old=0 auid=4294967295 ses=4294967295 res=1
type=CONFIG_CHANGE msg=audit(1632229262.549:9963): auid=4294967295 ses=4294967295 op=add_rule key=(null) list=5 res=1
type=CONFIG_CHANGE msg=audit(1632229262.549:9964): auid=4294967295 ses=4294967295 op=add_rule key=(null) list=5 res=1
type=CONFIG_CHANGE msg=audit(1632229262.549:9965): auid=4294967295 ses=4294967295 op=add_rule key=(null) list=5 res=1
type=CONFIG_CHANGE msg=audit(1632229262.549:9966): auid=4294967295 ses=4294967295 op=add_rule key=(null) list=0 res=1
type=CONFIG_CHANGE msg=audit(1632229262.549:9967): auid=4294967295 ses=4294967295 op=add_rule key=(null) list=4 res=1

Doing the same on the host itself gives:

[root@ip-10-243-224-144 ec2-user]# cat /var/log/audit/audit.log | head
type=DAEMON_START msg=audit(1632229262.538:9983): op=start ver=2.8.1 format=raw kernel=4.14.243-185.433.amzn2.x86_64 auid=4294967295 pid=32143 uid=0 ses=4294967295 res=success
type=SERVICE_START msg=audit(1632229261.537:9959): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
type=SERVICE_STOP msg=audit(1632229261.537:9960): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
type=CONFIG_CHANGE msg=audit(1632229262.537:9961): audit_enabled=1 old=1 auid=4294967295 ses=4294967295 res=1
type=CONFIG_CHANGE msg=audit(1632229262.537:9962): audit_pid=32143 old=0 auid=4294967295 ses=4294967295 res=1
type=CONFIG_CHANGE msg=audit(1632229262.549:9963): auid=4294967295 ses=4294967295 op=add_rule key=(null) list=5 res=1
type=CONFIG_CHANGE msg=audit(1632229262.549:9964): auid=4294967295 ses=4294967295 op=add_rule key=(null) list=5 res=1
type=CONFIG_CHANGE msg=audit(1632229262.549:9965): auid=4294967295 ses=4294967295 op=add_rule key=(null) list=5 res=1
type=CONFIG_CHANGE msg=audit(1632229262.549:9966): auid=4294967295 ses=4294967295 op=add_rule key=(null) list=0 res=1
type=CONFIG_CHANGE msg=audit(1632229262.549:9967): auid=4294967295 ses=4294967295 op=add_rule key=(null) list=4 res=1

To reiterate, my current auditbeat.yml config for Docker is:

# =========================== Modules configuration ============================
auditbeat.modules:

- module: auditd
  socket_type: multicast

- module: system
  datasets:
    - host
    - login
    - user
  period: 1m
  state.period: 24h
  user.detect_password_changes: true

- module: system
  datasets:
    - process
    - socket
  period: 1m
## ------------------------------ Kafka Output -------------------------------
output.kafka:
 I_HAVE_REMOVED_MY_KAFKA_CONFIG
# ================================= Processors =================================
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_process_metadata:
    #  match_pids: [system.process.ppid]
    # target: system.process.parent
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug
logging.selectors: [auditd, processors]
#logging.level: info
#logging.to_files: true
#logging.files:
  #path: /var/log/auditbeat
  #name: auditbeat
  #keepfiles: 7
  #permissions: 0644

Docker compose file is:

version: "3.8"

# https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-overview.html
# Does not look like Auditd is supported in Alpine linux: https://github.com/linuxkit/linuxkit/issues/52

services:

  auditbeat:
    user: root
    pid: host
    cap_add:
      - AUDIT_CONTROL
      - AUDIT_READ
      - CAP_SYS_ADMIN # This is an insecure option for getting access to /sys and bind mount tracefs debugfs
      - CAP_NET_ADMIN #This is needed in order for auditbeat to monitor using socket
    container_name: auditbeat
    hostname: auditbeat
    restart: always
    image: docker.elastic.co/beats/auditbeat:${ELASTIC_VERSION:-7.14.1}
    volumes:
      - /var/log:/var/log:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - ./auditbeat_v5_docker_elk.yml:/usr/share/auditbeat/auditbeat.yml:ro
      - ./auditd_conf.yaml:/usr/share/auditbeat/audit.rules.d/auditd_conf.yaml:ro
    command: auditbeat -e -strict.perms=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    stdin_open: true
    tty: true
    #network_mode: bridge
    networks:
      - auditbeat
    deploy:
      mode: global
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "50"
networks:
  auditbeat:
    external: true

I was able to reproduce the issue. I suspect that something changed in the kernel or Docker to isolate netlink sockets to a network namespace, because when I added network_mode: host to the docker-compose.yml, Auditbeat started receiving auditd data.

Please try sharing the host's network namespace with the container via network_mode: host.
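For example, a sketch of the relevant change in the compose file (network_mode and networks cannot be combined on the same service, so the networks block would be dropped):

services:
  auditbeat:
    # ...all other settings unchanged...
    network_mode: host
    # remove the "networks:" block from this service, since it
    # cannot be combined with network_mode: host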

I opened an issue to track this at https://github.com/elastic/beats/issues/28063.

Hi Andrew

This solved the issue!
I can now see auditd logs coming in, and I can also add my own auditd config file to my Docker container, doing more auditing than just what is set by the host.
May I ask how this relates to my problem, and which network modes are applicable (bridge, etc.)?
That way I won't make the same mistake again.

Hi Andrew

Sorry to bother you with yet another issue, but I came across something interesting again, related to permissions.
To summarize the current state: with the above config and network_mode: host, Auditbeat now runs fine in Docker via docker-compose and I can see its logs.

Now, when I try to package those configs into my own image using a Dockerfile, so I can push it to our internal Docker hub, the following happens.
I can build the image fine using:

[root@ip-10-243-224-144 auditbeat]# docker build .
[root@ip-10-243-224-144 auditbeat]# docker tag e4adb22ff731 auditbeat_dockerfile:v1
[root@ip-10-243-224-144 auditbeat]# docker images
REPOSITORY                          TAG       IMAGE ID       CREATED          SIZE
auditbeat_dockerfile                v1        e4adb22ff731   32 minutes ago   457MB

My Dockerfile is:

[root@ip-10-243-224-144 auditbeat]# cat Dockerfile
FROM docker.elastic.co/beats/auditbeat:7.14.1

COPY auditbeat_v5_docker_elk.yml /usr/share/auditbeat/auditbeat.yml
COPY auditd_conf.yaml /usr/share/auditbeat/audit.rules.d/auditd_conf.yaml

When I try to run a new container using this image I get the following:

[root@ip-10-243-224-144 auditbeat]# docker run e4adb22ff731
2021-09-23T09:32:40.479Z	ERROR	instance/beat.go:989	Exiting: 1 error: failed to create audit client: failed to open audit netlink socket: bind failed: operation not permitted
Exiting: 1 error: failed to create audit client: failed to open audit netlink socket: bind failed: operation not permitted

Is this because I'm not running the container with the proper network_mode: host, or did I make an error when building the image?

I found the issue: I had forgotten to provide the runtime options for the container.
The following fixed it and lets me run my own custom image, with the config files packaged:

docker run -d \
--net=host \
--cap-add=AUDIT_CONTROL --cap-add=AUDIT_READ --cap-add=CAP_SYS_ADMIN --cap-add=CAP_NET_ADMIN \
--user=root \
--pid=host \
IMAGE_ID
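
For reference, the equivalent docker-compose service for the custom image (a sketch, assuming the auditbeat_dockerfile:v1 tag from above) would be:

services:
  auditbeat:
    image: auditbeat_dockerfile:v1
    user: root
    pid: host
    network_mode: host
    cap_add:
      - AUDIT_CONTROL
      - AUDIT_READ
      - CAP_SYS_ADMIN
      - CAP_NET_ADMIN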

All is working now, and I look forward to seeing whether the network_mode: host requirement changes in the future :slight_smile:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.