I am encountering a persistent issue with Filebeat in my Kubernetes cluster. I have deployed Filebeat as a DaemonSet across nodes, but all Filebeat pods fail to start with the following error:
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Each Filebeat pod runs on a separate node, so there should be no conflict. However, the error suggests that multiple Beats are trying to access the same path.data. I have already ensured that path.data is configured uniquely per pod, along the lines shown below.
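A minimal sketch of that setup (the `POD_NAME` variable and the exact directory are illustrative; the pod name is injected via the Kubernetes Downward API):

```yaml
# filebeat.yml (excerpt): expand POD_NAME so each pod gets its own data directory
path.data: /usr/share/filebeat/data/${POD_NAME}
```

```yaml
# DaemonSet container spec (excerpt): inject the pod name via the Downward API
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```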
Despite this, the issue persists. I would appreciate any insights or recommendations on resolving this conflict while running Filebeat as a DaemonSet in Kubernetes.
The two manifests you've provided are materially different in several ways, and I don't see that you've customized the data path as described in the original post.
If you're using a host mount for the Filebeat data directory, you'll need to make sure no other Filebeat DaemonSet or Deployment is using the same host mount. Similarly, if you have deployed another DaemonSet in a different namespace, you'll hit the same conflict. That is, unless you're customizing the data path, as in the sketch below.
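For example, keeping the host mount but pointing each Filebeat install at its own host path avoids the lock conflict. A minimal sketch, assuming the stock Filebeat image layout (`/usr/share/filebeat/data`); the host path names here are illustrative:

```yaml
# Pod spec excerpt for one Filebeat DaemonSet: a hostPath unique to this install
volumes:
  - name: data
    hostPath:
      path: /var/lib/filebeat-main/data   # pick a different path for every other Filebeat install
      type: DirectoryOrCreate
containers:
  - name: filebeat
    volumeMounts:
      - name: data
        mountPath: /usr/share/filebeat/data
```

A second Filebeat install in another namespace would then use its own host path (for example `/var/lib/filebeat-audit/data`), so the two never contend for the same lock file on the node.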
Can you confirm which DaemonSet is the current one, and can you share the full log output from when Filebeat starts?