Elastic Agent fails to back up agent.yml from ConfigMap when using Fleet and Deployment mode on Kubernetes

Hi all,

I'm deploying Elastic Agent in a Kubernetes cluster using Fleet, following the official Elastic documentation for advanced configuration. My use case differs from the common DaemonSet setup: I'm running Elastic Agent as a Deployment and using it as a proxy to ship logs, not as a full node monitor.

Since I'm not running it as a DaemonSet, I wanted to disable the Kubernetes leader election provider, which is not needed in this setup and only produces noisy permission errors in the logs:

error retrieving resource lock elastic/elastic-agent-cluster-leader: leases.coordination.k8s.io "elastic-agent-clusterleader" is forbidden: User "system:serviceaccount:elastic:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "elastic"
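
For context, the missing permission itself could be granted instead of disabling the feature. A rough sketch of what that would look like (resource names are taken from the error above, the Role/RoleBinding names are hypothetical, and I have not applied this since I simply don't need leader election for a log-shipping proxy):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: elastic-agent-leases   # hypothetical name
  namespace: elastic
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: elastic-agent-leases   # hypothetical name
  namespace: elastic
subjects:
  - kind: ServiceAccount
    name: default
    namespace: elastic
roleRef:
  kind: Role
  name: elastic-agent-leases
  apiGroup: rbac.authorization.k8s.io

I'd rather disable the provider entirely, which led me to the steps below.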

:white_check_mark: What I did

  1. Created a ConfigMap with advanced settings:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-node-datastreams
  namespace: elastic
data:
  agent.yml: |-
    fleet.enabled: true
    providers.kubernetes_leaderelection.enabled: false
  2. Mounted this file into the container:
volumeMounts:
  - name: configmap
    mountPath: /etc/elastic-agent/agent.yml
    subPath: agent.yml
    readOnly: true

volumes:
  - name: configmap
    configMap:
      name: agent-node-datastreams
  3. Updated the agent startup args (a trimmed sketch of the full Deployment spec follows after this list):
args: ["-c", "/etc/elastic-agent/agent.yml", "-e"]
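
For reference, this is roughly how those pieces sit together in the Deployment spec (trimmed sketch; the image tag, the Fleet enrollment environment variables, and any names not already shown above are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic-agent
  namespace: elastic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elastic-agent
  template:
    metadata:
      labels:
        app: elastic-agent
    spec:
      serviceAccountName: default   # matches the service account in the error above
      containers:
        - name: elastic-agent
          image: docker.elastic.co/elastic-agent/elastic-agent:<version>   # tag elided
          args: ["-c", "/etc/elastic-agent/agent.yml", "-e"]
          # FLEET_URL / FLEET_ENROLLMENT_TOKEN env vars omitted for brevity
          volumeMounts:
            - name: configmap
              mountPath: /etc/elastic-agent/agent.yml
              subPath: agent.yml
              readOnly: true
      volumes:
        - name: configmap
          configMap:
            name: agent-node-datastreams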

:cross_mark: The Problem

On startup, Elastic Agent throws this error:

failed to store agent config: could not save enrollment information: could not backup /etc/elastic-agent/agent.yaml: rename /etc/elastic-agent/agent.yaml /etc/elastic-agent/agent.yaml.<timestamp>.bak: permission denied

It appears the agent always attempts to back up the agent.yml file, even when passed explicitly with -c, and even when using Fleet. This fails because the file is mounted from a ConfigMap and is therefore read-only.
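
The only workaround I've come up with so far (untested sketch) is to copy the config out of the read-only ConfigMap mount into a writable emptyDir with an init container, and point -c at that copy so the rename-based backup can succeed:

initContainers:
  - name: copy-agent-config
    image: busybox:1.36
    command: ["sh", "-c", "cp /config/agent.yml /etc/elastic-agent/agent.yml"]
    volumeMounts:
      - name: configmap
        mountPath: /config
      - name: agent-config
        mountPath: /etc/elastic-agent

containers:
  - name: elastic-agent
    args: ["-c", "/etc/elastic-agent/agent.yml", "-e"]
    volumeMounts:
      - name: agent-config      # writable copy instead of the ConfigMap mount
        mountPath: /etc/elastic-agent

volumes:
  - name: configmap
    configMap:
      name: agent-node-datastreams
  - name: agent-config
    emptyDir: {}

This feels like a lot of ceremony just to turn off one provider, though.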

It would be really great if there were an option to set provider parameters such as

providers.kubernetes_leaderelection.enabled: false

from Elastic Cloud Fleet management itself.

Please let me know if there is a cleaner workaround, or a supported way to configure this with the agent.

Thank you in advance.