In 7.17, internal monitoring of Beats was removed, so a separate Metricbeat is required for monitoring.
So I'm setting up Metricbeat as a DaemonSet to monitor my Filebeat.
Problem:
My Metricbeat does not know my Filebeat's endpoint. Creating a Kubernetes Service for my Filebeat DaemonSet only reaches a random Filebeat pod (there is no way to target a specific node), but I need to monitor the Filebeat running on every node.
Correct me if I am wrong, but autodiscover requires the privileged SCC (i.e. higher permissions), which my organization does not allow us to grant.
I explored creating a headless Service for my Filebeat, which returns a list of pod IPs. However, from some articles I checked, it seems that Metricbeat does not monitor multiple IPs returned from a DNS lookup.
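For reference, this is the kind of headless Service I mean; a sketch where the name, namespace, labels, and port are assumptions (5066 is Filebeat's default HTTP endpoint port when `http.enabled: true` is set):

```yaml
# Sketch: headless Service over a Filebeat DaemonSet
apiVersion: v1
kind: Service
metadata:
  name: filebeat-metrics      # assumed name
  namespace: logging          # assumed namespace
spec:
  clusterIP: None             # headless: DNS returns one A record per pod
  selector:
    app: filebeat             # assumed pod label
  ports:
    - name: http-metrics
      port: 5066              # Filebeat HTTP stats endpoint
```

A DNS lookup of `filebeat-metrics.logging.svc` then returns every pod IP, which is exactly the list I cannot get Metricbeat to iterate over.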
Likewise, I also want to use Metricbeat to monitor my Logstash instances running as a Deployment on Kubernetes.
I think you're perhaps a bit confused. You don't need Metricbeat to monitor Filebeat.
What was removed was an old legacy collection method. There is still internal collection, which we use all the time to ship Filebeat metrics to Elasticsearch.
Just point internal collection to the cluster where you want the metrics to show up.
In fact, if you just add these two lines, Filebeat will send the metrics to the same cluster that your Elasticsearch output points to:
```yaml
monitoring:
  enabled: true
```
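Put together in context, a minimal `filebeat.yml` would look something like this; the hosts and the optional dedicated monitoring cluster are placeholders, not your actual endpoints:

```yaml
# filebeat.yml (sketch; hosts are placeholders)
output.elasticsearch:
  hosts: ["https://es.example.internal:9200"]

monitoring:
  enabled: true
  # Optional: ship monitoring data to a separate monitoring cluster
  # instead of the output cluster above
  # elasticsearch:
  #   hosts: ["https://monitoring-es.example.internal:9200"]
```

With only `monitoring.enabled: true`, the metrics go to the same cluster as `output.elasticsearch`; the commented `monitoring.elasticsearch` section is how you redirect them elsewhere.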
Perhaps I'm confused, but we use this today in other Kubernetes environments.
The GitHub issue you reference is not related to Filebeat and metrics collection; it's an old Logstash issue.
Can you show me where it says there is a more comprehensive metricset for Filebeat? As far as I know, it's the exact same metricset whether you use internal collection or Metricbeat.
Yes, if you want to collect container metrics as well as Filebeat metrics, you should absolutely use Metricbeat.
If you want to deploy Metricbeat in Kubernetes, I would start with this.
I would get Metricbeat running for the normal system and container metrics first.
Then you'll need to do all the steps under the Metricbeat collection section.
Again, I have a number of customers running large Kubernetes clusters with Filebeat internal collection, and it works great.
What most people do is deploy Metricbeat to collect all the system, Kubernetes, and container metrics.
Then for all the Beats, including Metricbeat itself, they use internal collection to understand the health of their Beats. That's the normal pattern I see deployed in production.
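That pattern can be sketched as a split of responsibilities in `metricbeat.yml`: Metricbeat handles system/Kubernetes/container metrics, while each Beat (Metricbeat included) ships its own health via internal collection. The module layout below is a sketch; the periods, metricsets chosen, and the `${NODE_NAME}` kubelet host are assumptions you'd adapt to your cluster:

```yaml
# metricbeat.yml (sketch): infrastructure metrics via Metricbeat modules
metricbeat.modules:
  - module: system
    period: 10s
    metricsets: [cpu, memory, network, filesystem]
  - module: kubernetes
    period: 10s
    hosts: ["https://${NODE_NAME}:10250"]   # kubelet on each node (assumed)
    metricsets: [node, pod, container, volume]

# Metricbeat monitors its own health the same way Filebeat does:
monitoring:
  enabled: true
```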
Yes, I would like to collect container metrics as well, so I want to use Metricbeat.
Following your doc, it requires the higher privileges below on OpenShift, which is the exact same issue I am encountering (my problem pt. #2).
My organization does not allow this.
I'm not an OpenShift expert, but I know there has been some discussion about running Metricbeat in restricted environments; I don't know much about it, though.