[root@ip-10-xx-x-x filebeat]# filebeat -e -d "publish"
2021-01-03T09:12:11.438Z INFO instance/beat.go:645 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2021-01-03T09:12:11.438Z INFO instance/beat.go:653 Beat ID: e9b11c0c-5c0f-47a3-b78b-7a56cfe0e17f
2021-01-03T09:12:11.440Z INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:93 add_cloud_metadata: hosting provider type detected as aws, metadata={"account":{"id":"708419974150"},"availability_zone":"us-east-1b","image":{"id":"ami-05faeb5c6f7686e01"},"instance":{"id":"i-0ee3c30426b243ba8"},"machine":{"type":"m4.2xlarge"},"provider":"aws","region":"us-east-1"}
2021-01-03T09:12:11.440Z INFO instance/beat.go:392 filebeat stopped.
2021-01-03T09:12:11.440Z ERROR instance/beat.go:956 Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
[root@ip-10-xx-x-x filebeat]#
This is the response I see on all of the Filebeat pods (6+).
Can I get an example of where and how to add path.home in the filebeat-kubernetes.yaml to fix this error? @shaunak
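For reference, path.home and path.data can also be overridden on the command line, which only really matters if you deliberately start a second, throwaway Filebeat instance inside the container for debugging; the /tmp directory below is purely illustrative, and as the reply below explains, the stock filebeat-kubernetes.yaml should not need any path changes:

filebeat -e -d "publish" \
  --path.home /usr/share/filebeat \
  --path.data /tmp/filebeat-debug

Giving a manual run its own data directory avoids fighting over the lock held by the Filebeat instance already running as the container's main process.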
When a Filebeat Pod is spun up, the Filebeat process starts automatically inside the container, so I'm guessing you exec'd into the container and tried to start it again? That's why you get this error: another Filebeat instance is already running. Can you just check what the Filebeat pod's logs show when you deploy it? It should work out of the box.
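A minimal way to check that, assuming the pods come from the reference filebeat-kubernetes.yaml and run in the kube-system namespace (adjust the namespace, label, and placeholder pod name to your deployment):

kubectl get pods -n kube-system -l k8s-app=filebeat
kubectl logs -n kube-system -f <one-of-the-filebeat-pods>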
[root@ip-10-101-2-99 filebeat]# filebeat -e -d "*"
2021-01-04T14:27:55.957Z INFO instance/beat.go:645 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2021-01-04T14:27:55.957Z DEBUG [beat] instance/beat.go:697 Beat metadata path: /usr/share/filebeat/data/meta.json
2021-01-04T14:27:55.957Z INFO instance/beat.go:653 Beat ID: 4e5a1256-8a41-45aa-bf4b-6f6657c07380
2021-01-04T14:27:55.957Z DEBUG [docker] docker/client.go:48 Docker client will negotiate the API version on the first request.
2021-01-04T14:27:55.957Z DEBUG [add_cloud_metadata] add_cloud_metadata/providers.go:126 add_cloud_metadata: starting to fetch metadata, timeout=3s
2021-01-04T14:27:55.958Z DEBUG [add_docker_metadata] add_docker_metadata/add_docker_metadata.go:87 add_docker_metadata: docker environment not detected: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
2021-01-04T14:27:55.958Z DEBUG [add_cloud_metadata] add_cloud_metadata/providers.go:162 add_cloud_metadata: received disposition for digitalocean after 974.838µs. result=[provider:digitalocean, error=failed with http status code 404, metadata={}]
2021-01-04T14:27:55.961Z DEBUG [add_cloud_metadata] add_cloud_metadata/providers.go:162 add_cloud_metadata: received disposition for openstack after 3.315555ms. result=[provider:openstack, error=<nil>, metadata={"availability_zone":"us-east-1c","instance":{"id":"i-02dea175984a378da","name":"ip-10-101-2-99.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}]
2021-01-04T14:27:55.961Z DEBUG [add_cloud_metadata] add_cloud_metadata/providers.go:129 add_cloud_metadata: fetchMetadata ran for 3.398374ms
2021-01-04T14:27:55.961Z INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:93 add_cloud_metadata: hosting provider type detected as openstack, metadata={"availability_zone":"us-east-1c","instance":{"id":"i-02dea175984a378da","name":"ip-10-101-2-99.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}
2021-01-04T14:27:55.961Z DEBUG [processors] processors/processor.go:120 Generated new processors: add_cloud_metadata={"availability_zone":"us-east-1c","instance":{"id":"i-02dea175984a378da","name":"ip-10-101-2-99.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]]
2021-01-04T14:27:55.961Z INFO instance/beat.go:392 filebeat stopped.
2021-01-04T14:27:55.961Z ERROR instance/beat.go:956 Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
All the pods show the same thing (8 Filebeat pods in total).
What am I missing here?
What more info do you need?
There is a Filebeat process with PID 1, which is the container's main process. Then, from what I can see, you try to start Filebeat manually a second time. Am I missing something?
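One quick way to confirm that, using a placeholder pod name; inside the container, PID 1 should already be the filebeat command:

kubectl exec -n kube-system <filebeat-pod-name> -- cat /proc/1/cmdline | tr '\0' ' '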
Yes, but then I see that you run filebeat -e -d "*" again, at least in the example you shared above. That starts another Filebeat process rather than just showing the logs of the one already running. Is this what happens here?