Error in beats elastic agent container: open /hostfs/sys/fs/cgroup/io.pressure: no such file or directory

I'm running an Elastic Agent container (docker.elastic.co/beats/elastic-agent:8.0.1) as a DaemonSet (configuration here - https://raw.githubusercontent.com/elastic/beats/8.0/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml) in my Kubernetes cluster (both via Docker for Mac and on Kubernetes in DigitalOcean). The agent, enrolled via Fleet, consistently reports that it can't find /hostfs/sys/fs/cgroup/io.pressure.

Exec-ing into the container, I can see that this file is indeed missing, although the directory is successfully mounted per the spec and many other files are present:

ls /hostfs/sys/fs/cgroup
000-dhcpcd  004-extend     008-services1     allowlist           cgroup.max.descendants  cgroup.threads         devenv-service  http-proxy  memory.stat   systemreserved
001-sysfs   005-mount      009-swap          binfmt              cgroup.procs            cpu.stat               dhcpcd          io.stat     podruntime    volume-contents
002-sysctl  006-metadata   010-mount-docker  cgroup.controllers  cgroup.stat             cpuset.cpus.effective  dns-forwarder   kmsg        procd-paused
003-format  007-services0  011-bridge        cgroup.max.depth    cgroup.subtree_control  cpuset.mems.effective  docker          kubepods    restricted
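For what it's worth, io.pressure belongs to the kernel's Pressure Stall Information (PSI) interface, which only exists if the kernel was built with CONFIG_PSI (and, on some distros, booted with psi=1). A quick check from inside the container (paths assume the same /hostfs mount as above):

```shell
# Check whether the host kernel exposes PSI at all; if /proc/pressure is
# missing, the cgroup *.pressure files won't exist either.
if [ -d /hostfs/proc/pressure ] || [ -d /proc/pressure ]; then
  echo "PSI available"
else
  echo "PSI not available"
fi
```

On my Docker for Mac node this reports "PSI not available", which would explain the missing file, but I'm not sure how to make the agent tolerate that.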

Configuring the Elastic Agent certainly isn't easy. Having experimented with adjusting /usr/share/elastic-agent/elastic-agent.yml (and then restarting the container), it seems unclear exactly how to affect the behaviour, and the split between that YAML file and the various pieces of config in the Fleet console makes fine-tuning the flow of logs from my containers into Elastic really quite difficult, especially when I'd like to keep as much config as possible in source control. I'd also really like to be able to say "try to parse the logs as JSON; if that fails, that's fine too", since some processes emit LDJSON and others don't.
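To illustrate the "parse as JSON if possible" behaviour I'm after, the Beats decode_json_fields processor looks close - something like this sketch (where exactly this processors block belongs in elastic-agent.yml versus the Fleet integration settings is precisely what I'm unsure about):

```yaml
# Sketch: attempt JSON decoding of each log line, leaving plain-text
# (non-JSON) lines untouched rather than treating them as errors.
processors:
  - decode_json_fields:
      fields: ["message"]     # the raw log line
      target: ""              # merge decoded keys into the event root
      overwrite_keys: false
      add_error_key: false    # don't flag non-JSON lines as failures
```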

Any ideas would be appreciated, thank you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.