Hey, I am setting up an observability use case to test with Docker, and I want to collect Elasticsearch logs (gc, audit, etc.) using Filebeat.
I have Elasticsearch running in one Docker container and Filebeat running in another container. What configuration do I need to collect the logs?
The Collecting Elasticsearch log data with Filebeat guide says that I have to install Filebeat on the same host or VM where Elasticsearch is running, but since I am in a Docker context, should I build my own Dockerfile that runs Elasticsearch and Filebeat in the same container? I can't find any related information in the official documentation; I found some webinars, but they don't cover the steps.
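For readers with the same question: you don't need to put both processes in one container. A common pattern is to run Filebeat in its own container and mount the Docker log directory and socket into it. A minimal docker-compose sketch of that idea, with an assumed image tag as a placeholder:

```yaml
services:
  filebeat:
    # placeholder tag; use the version matching your stack
    image: docker.elastic.co/beats/filebeat:8.6.2
    user: root
    volumes:
      # your Filebeat configuration
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      # container log files written by the Docker daemon
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      # socket used by the docker autodiscover provider
      - /var/run/docker.sock:/var/run/docker.sock:ro
```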
One more thing to add, here are the relevant lines from my filebeat.yml:
Thank you, @Tetiana_Kravchenko, for answering back. To be honest, I had already seen the label with autodiscover but didn't understand how to make it work. Here is what I did based on the docs:
I added labels for the Elasticsearch service in docker-compose.yml:
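For context, the label in question is the `co.elastic.logs/module` hint from Filebeat's hints-based autodiscover. A minimal sketch of how it could be attached to the service (image tag is a placeholder):

```yaml
services:
  elasticsearch:
    # placeholder tag; use your actual version
    image: docker.elastic.co/elasticsearch/elasticsearch:8.6.2
    labels:
      # tells Filebeat's hints-based autodiscover to parse this
      # container's logs with the elasticsearch module
      co.elastic.logs/module: elasticsearch
```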
Forgot to mention that I am using Docker Desktop on Windows 10. I enabled logging for Filebeat and got the following error: ERROR metrics/metrics.go:297 cgroups data collection disabled: error finding subsystems: cgroups not found or unsupported by os
Sorry for the late reply and thank you for the detailed explanation!
Did you check whether logs are actually being ingested? You can check Discover in Kibana; you should also change the index pattern to the one the logs are ingested into (for example, filebeat-*).
At first glance, I think you are using the wrong hints.default_config.paths: /var/log/containers/*-${data.container.id}.log is mainly used for Kubernetes environments. Could you try /var/lib/docker/containers/*/*.log instead?
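In filebeat.yml terms, the suggested change would look something like this (the surrounding autodiscover settings are assumed here, not taken from the original post):

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          # Docker (not Kubernetes) log location
          - /var/lib/docker/containers/*/*.log
```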
Thank you a lot, @Tetiana_Kravchenko, you saved me a lot of time. I made the changes you described above; here is the resulting filebeat.yml for people who find this discussion in the future:
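The original attachment was not preserved in this thread; as a hedged reconstruction, a working filebeat.yml for this setup could look roughly like the sketch below (the output host and credentials are placeholders, not the poster's values):

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/lib/docker/containers/*/*.log

output.elasticsearch:
  # placeholder hostname for the Elasticsearch container
  hosts: ["http://elasticsearch:9200"]
```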
One last question: in Autodiscover, the documented hints.default_config.paths for Docker is the same as for Kubernetes. That's a typo, isn't it? Please confirm so I can open a pull request to fix the doc.
I think it is a typo; for Docker, /var/lib/docker/containers/*/*.log should be used. And if I am not mistaken, /var/lib/docker/containers/*/*.log is the default value, so you may not even need to set it explicitly.
One more thing to mention: even with the new configuration, I get logs, but they are not recognized as Elasticsearch logs; they still show up as 'unknown' (see image below). How can I add that information, so that, for example, instead of unknown I see elasticsearch-gc logs?
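One possibility worth trying here, based on the hints-based autodiscover documentation (not confirmed in this thread), is to pin the module and fileset via container labels so Filebeat parses the logs accordingly:

```yaml
services:
  elasticsearch:
    labels:
      co.elastic.logs/module: elasticsearch
      # optionally pin a specific fileset, e.g. GC logs
      co.elastic.logs/fileset: gc
```

Note that pinning a single fileset applies it to all of the container's log lines, so it fits best when the container emits one log type.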