"Data path already locked by another beat": how to configure path.data in filebeat-kubernetes.yaml when running on Kubernetes?

Hey,
I installed Filebeat using this link:


Right away I got this error:
[root@ip-10-xx-x-x filebeat]# filebeat -e -d "publish"
2021-01-03T09:12:11.438Z        INFO    instance/beat.go:645    Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2021-01-03T09:12:11.438Z        INFO    instance/beat.go:653    Beat ID: e9b11c0c-5c0f-47a3-b78b-7a56cfe0e17f
2021-01-03T09:12:11.440Z        INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:93     add_cloud_metadata: hosting provider type detected as aws, metadata={"account":{"id":"708419974150"},"availability_zone":"us-east-1b","image":{"id":"ami-05faeb5c6f7686e01"},"instance":{"id":"i-0ee3c30426b243ba8"},"machine":{"type":"m4.2xlarge"},"provider":"aws","region":"us-east-1"}
2021-01-03T09:12:11.440Z        INFO    instance/beat.go:392    filebeat stopped.
2021-01-03T09:12:11.440Z        ERROR   instance/beat.go:956    Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
[root@ip-10-xx-x-x filebeat]#

I see this same response on all Filebeat pods (6+).

Can I get an example of where and how to add path.data in filebeat-kubernetes.yaml to fix this error?
@shaunak
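For context, in the stock filebeat-kubernetes.yaml manifest the data path inside the container is usually backed by a hostPath volume, so the registry survives pod restarts. A sketch of the relevant sections (the image tag and volume names here are assumptions based on the standard Elastic manifest; adjust them to match the actual file):

```yaml
# Excerpt from a typical Filebeat DaemonSet spec (not the full manifest).
containers:
- name: filebeat
  image: docker.elastic.co/beats/filebeat:7.10.1   # assumed version
  args: ["-c", "/etc/filebeat.yml", "-e"]
  volumeMounts:
  - name: data
    mountPath: /usr/share/filebeat/data   # this directory is path.data inside the container
volumes:
- name: data
  hostPath:
    path: /var/lib/filebeat-data          # per-node directory backing path.data
    type: DirectoryOrCreate
```

Since each pod gets its own node-local directory, two pods normally never share a data path; the lock error only appears when two Filebeat processes use the same directory.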

Hi!

When a Filebeat Pod spins up, the Filebeat process starts automatically inside the container, so I guess you exec'd into the container and tried to start it again? That is why you get this error: another Filebeat instance is already running. Can you check what the Filebeat pod's logs show when you deploy it? It should work out of the box.
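The check above does not require exec'ing into the container at all; a sketch using kubectl (the kube-system namespace and the k8s-app=filebeat label are assumptions based on the stock Elastic manifest, so adjust them to yours):

```shell
# List the Filebeat pods deployed by the DaemonSet.
kubectl get pods -n kube-system -l k8s-app=filebeat

# Stream the logs of the already-running Filebeat process (PID 1)
# instead of starting a second instance inside the container.
kubectl logs -n kube-system -f <filebeat-pod-name>
```

Because `kubectl logs` only reads the container's stdout, it cannot trigger the data-path lock the way a second `filebeat` invocation does.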

C.

There is no other Filebeat process running:

[root@ip-10-101-2-99 filebeat]# ps -ef 
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 13:18 ?        00:00:03 filebeat -c /etc/filebeat.yml -e
root        37     0  0 14:27 pts/0    00:00:00 sh -c clear; (bash || ash || sh)
root        43    37  0 14:27 pts/0    00:00:00 sh -c clear; (bash || ash || sh)
root        44    43  0 14:27 pts/0    00:00:00 bash
root        76    44  0 14:29 pts/0    00:00:00 ps -ef

Here is the log:

[root@ip-10-101-2-99 filebeat]# filebeat -e -d "*"
2021-01-04T14:27:55.957Z        INFO    instance/beat.go:645    Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2021-01-04T14:27:55.957Z        DEBUG   [beat]  instance/beat.go:697    Beat metadata path: /usr/share/filebeat/data/meta.json
2021-01-04T14:27:55.957Z        INFO    instance/beat.go:653    Beat ID: 4e5a1256-8a41-45aa-bf4b-6f6657c07380
2021-01-04T14:27:55.957Z        DEBUG   [docker]        docker/client.go:48     Docker client will negotiate the API version on the first request.
2021-01-04T14:27:55.957Z        DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:126     add_cloud_metadata: starting to fetch metadata, timeout=3s
2021-01-04T14:27:55.958Z        DEBUG   [add_docker_metadata]   add_docker_metadata/add_docker_metadata.go:87   add_docker_metadata: docker environment not detected: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
2021-01-04T14:27:55.958Z        DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:162     add_cloud_metadata: received disposition for digitalocean after 974.838µs. result=[provider:digitalocean, error=failed with http status code 404, metadata={}]
2021-01-04T14:27:55.961Z        DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:162     add_cloud_metadata: received disposition for openstack after 3.315555ms. result=[provider:openstack, error=<nil>, metadata={"availability_zone":"us-east-1c","instance":{"id":"i-02dea175984a378da","name":"ip-10-101-2-99.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}]
2021-01-04T14:27:55.961Z        DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:129     add_cloud_metadata: fetchMetadata ran for 3.398374ms
2021-01-04T14:27:55.961Z        INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:93     add_cloud_metadata: hosting provider type detected as openstack, metadata={"availability_zone":"us-east-1c","instance":{"id":"i-02dea175984a378da","name":"ip-10-101-2-99.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}
2021-01-04T14:27:55.961Z        DEBUG   [processors]    processors/processor.go:120     Generated new processors: add_cloud_metadata={"availability_zone":"us-east-1c","instance":{"id":"i-02dea175984a378da","name":"ip-10-101-2-99.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]]
2021-01-04T14:27:55.961Z        INFO    instance/beat.go:392    filebeat stopped.
2021-01-04T14:27:55.961Z        ERROR   instance/beat.go:956    Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).

All the pods show the same thing (8 Filebeat pods).

What am I missing here?
What more info do you need?

Hi!

I see the following:

A filebeat process with PID 1, which is actually the container's main process. Then, from what I can see, you try to start Filebeat manually again. Am I missing something?

I don't start anything manually at all.
I just run:
kubectl create -f filebeat-kubernetes.yaml
and I can see the pods created and started.

Yes, but then I see that you run filebeat -e -d "*" again, at least in the example you shared above. This starts a new Filebeat process; it does not just print the logs of the existing one. Is this what happens here?
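If a second Filebeat really does need to run by hand for debugging while PID 1 keeps running, the two instances must not share path.data, since the first one holds a lock file there. A sketch (the /tmp directory here is an arbitrary choice, not a recommendation from the manifest):

```shell
# Point the manual debug run at its own data directory so it does not
# collide with the lock held by the instance running as PID 1.
filebeat -e -d "*" -c /etc/filebeat.yml --path.data /tmp/filebeat-debug
```

Note this debug instance gets its own registry, so it will re-read files from scratch; it is only useful for inspecting behavior, not as a second shipper.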

@ChrsMark
Well, everything worked fine (other problems came up) after I changed the configuration (see the last comment).

But the error in the log is still the very same...
I don't know where it will bite me later on...