How to use Metricbeat to monitor Filebeat running as a DaemonSet on K8s

Hi all,

My Filebeat runs as a DaemonSet on a K8s cluster.

In 7.17, internal monitoring of Beats is removed and a separate Metricbeat is required to achieve monitoring.
So I'm setting up Metricbeat as a DaemonSet to monitor my Filebeat.

Problem:

  1. My Metricbeat does not know my Filebeat's endpoint. Creating a K8s Service for my Filebeat DaemonSet only reaches a random Filebeat pod (there is no way to reach a specific node), but I need to monitor every Filebeat running on every node.

  2. Correct me if I am wrong, but autodiscover requires the privileged SCC (i.e. higher permissions), which my organization does not allow us to grant.

  3. I explored creating a headless Service for my Filebeat, which returns a list of pod IPs. However, from the articles I checked, it seems that Metricbeat does not monitor multiple IPs returned from a single DNS name. (A rough sketch of the headless Service I tried is after this list.)
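For reference, this is roughly what the headless Service I tried looks like. The name, namespace, labels, and port 5066 are just placeholders from my setup, assuming Filebeat's HTTP monitoring endpoint is enabled on that port:

apiVersion: v1
kind: Service
metadata:
  name: filebeat-monitoring        # placeholder name
  namespace: logging               # placeholder namespace
spec:
  clusterIP: None                  # headless: DNS returns one A record per Filebeat pod
  selector:
    k8s-app: filebeat              # must match the Filebeat DaemonSet pod labels
  ports:
    - name: http-metrics
      port: 5066
      targetPort: 5066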

Similarly, I also want to use Metricbeat to monitor my Logstash instances running as a Deployment on K8s.

Thanks.

Hi @cmy214, welcome to the community!

I think you're perhaps a bit confused. You don't need Metricbeat to monitor Filebeat.

What was removed was an old legacy collection method. There is still internal collection, which we use all the time to ship Filebeat metrics to Elasticsearch.

Just point the internal collection to the cluster where you want the metrics to show up.

In fact, if you just add these two lines, it will send the metrics to the same cluster that your Elasticsearch output is set to.

monitoring:
  enabled: true
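
If you want the metrics to go to a dedicated monitoring cluster instead, you can point internal collection at it explicitly. A minimal sketch, where the host and password are placeholders for your own values:

monitoring:
  enabled: true
  elasticsearch:
    hosts: ["https://monitoring-es:9200"]   # placeholder monitoring cluster
    username: "remote_monitoring_user"
    password: "${MONITORING_PASSWORD}"      # placeholder secret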

Perhaps I'm confused, but we use this today in other Kubernetes environments.

Hi @stephenb, thanks for your information, I understand now.

However, Metricbeat is still a better option because:

  1. Metricbeat can collect more comprehensive data.
  2. I understand that the roadmap is for internal monitoring to shift to Metricbeat:
    [Stack Monitoring] Remove internal collectors · Issue #11169 · elastic/logstash · GitHub

Could you / anyone assist me, please?

The GitHub issue you reference is not related to Filebeat and metrics collection. It's an old Logstash issue.

Can you show me where it says there is a more comprehensive metricset for Filebeat? As far as I know, it's the exact same metricset whether you use internal collection or Metricbeat.

Yes, if you want to collect container metrics as well as Filebeat metrics, you should absolutely use Metricbeat.

If you want to deploy Metricbeat in Kubernetes, I would start with this.

I would get Metricbeat up and running for the normal system and container metrics first.

Then you'll need to do all the steps under the Metricbeat collection section.
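
Roughly, those steps amount to enabling the HTTP endpoint on Filebeat and pointing Metricbeat's beat module at it. A sketch, under the assumption that each node-local Metricbeat can reach its Filebeat at localhost:5066 (adjust the address to whatever is actually reachable in your cluster):

In filebeat.yml:

http:
  enabled: true
  host: localhost
  port: 5066

In metricbeat.yml:

metricbeat.modules:
  - module: beat
    metricsets: ["stats", "state"]
    period: 10s
    hosts: ["http://localhost:5066"]
    xpack.enabled: true              # ship the data in Stack Monitoring format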

Again, I have a number of customers running large Kubernetes clusters with Filebeat internal collection, and it works great.

What most people do is deploy Metricbeat to collect all the system, Kubernetes, and container metrics, etc.

Then, for all the Beats, including even Metricbeat itself, they use internal collection to understand the health of their Beats. That's the normal pattern that I see deployed in production.
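
As a sketch of that pattern, the Metricbeat DaemonSet config typically looks something like the snippet below (trimmed down; ${NODE_NAME} is injected via the downward API in the reference manifests), while each Beat's own config simply carries monitoring.enabled: true for its health data:

metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "network", "filesystem"]
    period: 10s
  - module: kubernetes
    metricsets: ["node", "pod", "container", "volume"]
    period: 10s
    hosts: ["https://${NODE_NAME}:10250"]   # kubelet metrics endpoint
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    ssl.verification_mode: "none"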

Hi stephenb,

Yes, I would like to collect container metrics as well, so I want to use Metricbeat.

Following your doc, it requires the higher privileges below in OpenShift, which is the exact same issue that I am encountering (my problem pt. #2).
My organization does not allow this.

securityContext:
  runAsUser: 0
  privileged: true

Regards,
Tom

Hi @cmy214

I am not an OpenShift expert, but I know there has been some discussion about running Metricbeat in restricted environments; I do not know much about it.

Here is the issue, I believe.