I'm still pretty new to the Elastic Stack. I come from Prometheus and Grafana, and I'd like to replace them with Metricbeat, Filebeat, and Kibana.
As a newbie, I always wonder what the best practice is for deploying Filebeat and Metricbeat. Should I deploy just one of each for the whole Docker Swarm cluster? One for each Swarm node? Or one for each service?
If there is a video about this, please kindly let me know.
Thank you all.
You only need one Filebeat/Metricbeat per host. If you run Metricbeat as a container, make sure to mount all the necessary host filesystems; there is a guide here: Run Metricbeat on Docker | Metricbeat Reference [8.1] | Elastic.
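Since this is a Swarm cluster, one way to get exactly one Metricbeat per host is a `global`-mode service in a stack file. A minimal sketch (image tag, paths, and mounts are illustrative; follow the linked guide for the exact options your version needs):

```yaml
# Sketch of a Metricbeat service for a Swarm stack file.
# deploy.mode: global -> Swarm schedules one instance on every node.
services:
  metricbeat:
    image: docker.elastic.co/beats/metricbeat:8.1.3   # pick your version
    user: root                                        # needed to read host/docker data
    deploy:
      mode: global
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # local Docker daemon (docker module / autodiscover)
      - /proc:/hostfs/proc:ro                         # host process metrics
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro       # host cgroup metrics
      - /:/hostfs:ro                                  # host filesystem metrics
    # The linked guide also points the system module at /hostfs so it reports
    # host metrics instead of container metrics; check the guide for the exact
    # flag or setting for your Metricbeat version.
```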
Another option is to use the Elastic Agent, so you can focus on the data you want to collect and the services you want to monitor, and the Elastic Agent takes care of managing the Beats for you.
I think that if I deploy the same Metricbeat configuration on every node, there will be duplicate documents in Elasticsearch.
Let's say I have 1 manager and 1 worker node, and both of them run Metricbeat with the following autodiscover provider config:
- type: docker
and there is a service running in global mode. Then the Metricbeat instances on node1 and node2 will each collect data for this service on both node1 and node2.
I haven't tested this yet (I'm having problems getting Metricbeat autodiscover to work), but I guess there would be duplicate documents.
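For reference, a `- type: docker` provider normally defaults to the node's local Docker socket, so each Metricbeat only discovers its own node's containers. A hedged sketch of what such a per-node config might look like (the Redis condition, module, and port are just placeholders):

```yaml
# Hypothetical per-node autodiscover sketch: each Metricbeat talks only to
# its local Docker daemon, so the same container is never discovered twice.
metricbeat.autodiscover:
  providers:
    - type: docker
      host: "unix:///var/run/docker.sock"   # local daemon only (this is the default)
      templates:
        - condition:
            contains:
              docker.container.image: "redis"   # example service, adjust to yours
          config:
            - module: redis
              metricsets: ["info"]
              hosts: "${data.host}:${data.port}"   # filled in per discovered container
```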
If you connect to a Docker daemon running on another node, then you'll probably get duplicated events, because multiple instances of Metricbeat will be able to discover the same containers.
The example in our documentation only connects, via Unix socket, to the Docker daemon running on the same node as Metricbeat.
If the Docker daemon you're connecting to via TCP can access all containers, then you'll need only one Metricbeat.
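The difference comes down to the `hosts` setting of the docker module: a Unix socket only sees the local node's containers, while a TCP endpoint could be a daemon that sees more. A sketch (the TCP hostname/port is a made-up example):

```yaml
# docker module pointed at the local daemon -> one Metricbeat per node.
- module: docker
  metricsets: ["container", "cpu", "memory", "network"]
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  # If instead you point it at a remote daemon over TCP, e.g.
  #   hosts: ["tcp://swarm-manager.example:2375"]   # hypothetical endpoint
  # then one Metricbeat sees every container that daemon knows about, and a
  # second instance with the same config would produce duplicate events.
```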
Is it the autodiscover itself that is not working, or the communication with the Docker Engine API?
I have migrated to Elastic Agent now. I remember that, from the logs, Metricbeat could discover my service, but it didn't try to fetch metric data from it.
But it doesn't matter now.
Thank you for your help.