Filebeat DaemonSet Fails with "Data Path Already Locked" Error

I am encountering a persistent issue with Filebeat in my Kubernetes cluster. I have deployed Filebeat as a DaemonSet across nodes, but all Filebeat pods fail to start with the following error:

Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).

Each Filebeat pod runs on a separate node, so there should be no conflict. However, the error suggests that multiple beats are trying to access the same path.data. I have already configured path.data to be unique per pod with:

filebeat:
  path:
    data: "/var/lib/filebeat-${HOSTNAME}"

Despite this, the issue persists. I would appreciate any insights or recommendations on resolving this conflict while running Filebeat as a DaemonSet in Kubernetes.

Looking forward to your guidance.

Can you share your DaemonSet manifest?

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.17.15
          imagePullPolicy: Always
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          args: ["-c", "/etc/filebeat.yml", "-e"]
          securityContext:
            runAsUser: 0
            runAsGroup: 0
          volumeMounts:
            - mountPath: /etc/filebeat.yml
              name: filebeat-config
              subPath: filebeat.yml
            - mountPath: /var/log/containers
              name: varlog
            - mountPath: /var/lib/docker/containers
              name: varlibdockercontainers
              readOnly: true
            - name: data
              mountPath: /var/lib/filebeat
      volumes:
        - name: filebeat-config
          configMap:
            name: filebeat-config
        - name: varlog
          hostPath:
            path: /var/log/containers
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: data
          hostPath:
            path: /var/lib/filebeat
            type: DirectoryOrCreate

This is my DaemonSet manifest.

Sorry -- can you also share the full filebeat.yml you're using?

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      hostNetwork: false
      securityContext:
        runAsUser: 0
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.17.15
          imagePullPolicy: Always 
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          args: ["-c",  "/usr/share/filebeat/filebeat.yml", "-e"]
          volumeMounts:
            - name: config
              mountPath: /usr/share/filebeat/filebeat.yml
              subPath: filebeat.yml
              readOnly: true
            - name: varlogcontainers
              mountPath: /var/log/containers
              readOnly: true
            - name: dockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true

          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - filebeat test output
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: config
          configMap:
            name: filebeat
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: dockercontainers
          hostPath:
            path: /var/lib/docker/containers

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat
  namespace: logging
  labels:
    app: filebeat
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        
        multiline.pattern: '^\['
        multiline.negate: true
        multiline.match: after

        processors:
          - add_kubernetes_metadata:
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"

    output.logstash:
      hosts: ["host_IP:5044"] # ELK stack runs on another machine, outside the k8s cluster
      loadbalance: true
      ssl.enabled: false

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    app: filebeat

Along with these, I am also using ClusterRole and ClusterRoleBinding YAMLs.

Oh, sorry strawgate -- the Elastic community forum flagged my code.

The two manifests you've provided are materially different in several ways, and I don't see that you've customized the data path as indicated in the original post.
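
As a side note: as far as I know, path.data in the Beats config is a top-level setting, not something nested under the filebeat: key, so a per-node override would look roughly like the following. The ${NODE_NAME} expansion is just an illustration reusing the env var your manifests already define:

# top-level in filebeat.yml, not under the filebeat: key
path.data: "/usr/share/filebeat/data-${NODE_NAME}"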

If you're using a host mount for the Filebeat data directory, you'll need to make sure that no other Filebeat DaemonSet or Deployment is using the same host mount. Similarly, if you have deployed another Filebeat DaemonSet in a different namespace, you'll hit the same conflict -- that is, unless you customize the data path per deployment, as in the sketch below.
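
For reference, here is a minimal sketch of what that could look like if you do keep a hostPath for the data directory. The mount target is Filebeat's default data directory in the official image, and /var/lib/filebeat-logging is just an illustrative name to make it unique per deployment:

          volumeMounts:
            - name: data
              # Filebeat's default data directory inside the official image
              mountPath: /usr/share/filebeat/data
      volumes:
        - name: data
          hostPath:
            # keep this path unique per Filebeat deployment so no two beats share a lock file
            path: /var/lib/filebeat-logging
            type: DirectoryOrCreate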

Can you confirm which DaemonSet is the current one, and can you share the full log that Filebeat prints when it starts?