Containerized Fleet Server not maintaining persistence

I'm running elastic-agent-complete:9.0.2 with podman, mounting a /data directory to /usr/share/elastic-agent/data inside the container. However, every time the container is restarted, it registers a new Fleet Server agent. As far as I can determine, this is a permissions issue in the container. From what I've read, you have to run the container as root (--user 0 in the podman run command); if you run as user 1000, then agent status reports agent: healthy but fleet: failed. But when running as root, the first thing the container does on start is chown the paths to root:root and then drop privileges to user 1000, which leaves the entire elastic-agent directory unwritable by the agent process. That prevents the agent from writing its own state, so it can't persist across container restarts or updates. I'm at my wits' end trying to figure this out.
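For reference, the run command looks roughly like this (the registry path, container name, and enrollment settings are placeholders, not my exact command):

    # illustrative only: data dir mounted, container started as root per the description above
    podman run -d --name fleet-server \
      --user 0 \
      -p 8220:8220 \
      -v /data:/usr/share/elastic-agent/data \
      docker.elastic.co/elastic-agent/elastic-agent-complete:9.0.2
    # plus the usual Fleet Server / enrollment environment variables, omitted here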

Are you using a compose file? What does it look like?

I have Fleet working in a container without any issues. I have something like this in my compose file:

      volumes:
        - fleet:/usr/share/elastic-agent/state
      ports:
        - ${FLEET_PORT}:8220
      user: root

You need to persist the state path, not the one you shared.

No compose. Just using podman run for now until I get it sorted; then I'll make a systemd unit for it. I was originally only mounting the state path, but it never populated with any data, so the agent state never persisted across restarts.
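That earlier attempt looked roughly like this (the host directory is hypothetical; the point is that only /usr/share/elastic-agent/state was mounted, matching the compose example above):

    # illustrative only: state dir mounted instead of data, still started as root
    podman run -d --name fleet-server \
      --user 0 \
      -p 8220:8220 \
      -v /data/fleet-state:/usr/share/elastic-agent/state \
      docker.elastic.co/elastic-agent/elastic-agent-complete:9.0.2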