Autodiscovery setting in Metricbeat

I was going through this nice and comprehensive presentation by @pebrc

At the point the link jumps to, the presenter adds the label scrape: es to the Elasticsearch resource definition, which matches a corresponding spec.config.metricbeat.autodiscover.providers[0].templates[0].condition in the Beat resource.
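For context, the pairing described above looks roughly like this — a minimal sketch, with the label placed on the pod template (exact field values and the `contains` condition syntax are assumptions based on the ECK stack-monitoring recipes, not taken from the video):

```yaml
# Elasticsearch side: label the pods so the Beat condition can match them
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 7.12.0
  nodeSets:
    - name: default
      count: 3
      podTemplate:
        metadata:
          labels:
            scrape: es   # the label the autodiscover condition keys on
---
# Beat side: only pods carrying that label get the elasticsearch module
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat
spec:
  type: metricbeat
  version: 7.12.0
  config:
    metricbeat:
      autodiscover:
        providers:
          - type: kubernetes
            templates:
              - condition:
                  contains:
                    kubernetes.labels.scrape: es
                config:
                  - module: elasticsearch
                    metricsets: ["node", "node_stats"]
                    hosts: ["https://${data.host}:${data.ports.https}"]
```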

However, I don't see such an entry here.

Do I have to add it manually, or is everything scraped by default in 7.12.0? Perhaps changes were introduced since 7.10.0 and operator 1.2 (the versions used in the presentation).

I am using ECK 1.5 and ES 7.12.0.

This example is very close to what was presented in the video you linked. You still need to add the labels in order to set up the Elastic Stack specific monitoring modules.

You can however scrape generic metrics without adding additional metadata and conditions as is shown in the example you linked to.
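To illustrate the distinction: generic metrics need no pod-matching conditions at all, because they come from the kubelet rather than from a workload-specific module. A minimal sketch (metricset names from the standard Metricbeat kubernetes module; the kubelet endpoint and TLS details are assumptions that vary per cluster):

```yaml
# Generic Kubernetes metrics — no autodiscover conditions required,
# since this reads from the local kubelet, not from individual pods.
metricbeat:
  modules:
    - module: kubernetes
      metricsets: ["node", "pod", "container", "volume"]
      period: 10s
      hosts: ["https://${NODE_NAME}:10250"]   # kubelet endpoint (assumed)
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```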

Thank you. So to get this straight, if I don't add this to the Beat definition

    - condition:
        kubernetes.labels.scrape: es

everything or nothing will be scraped?

(i.e. in case no condition at all is present in Beat)

Without a condition, every pod will be scraped. I don't know what you want to achieve, but having no condition is often a conceptual mistake. It can make sense together with the unique flag that autodiscover now offers, but not otherwise.
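The unique flag mentioned above enables leader election, so that exactly one Metricbeat instance runs the cluster-wide templates — in that case the template legitimately has no per-pod condition. A rough sketch (field names per the Beats kubernetes autodiscover provider; the kube-state-metrics host is an assumption):

```yaml
# Cluster-scoped autodiscover with leader election: only the instance
# holding the lease applies the template, so no condition is needed.
metricbeat:
  autodiscover:
    providers:
      - type: kubernetes
        scope: cluster
        unique: true                   # one elected leader runs this
        templates:
          - config:
              - module: kubernetes
                metricsets: ["state_node", "state_pod"]
                hosts: ["kube-state-metrics:8080"]   # assumed service name
```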

@pkaramol , let me add something else to the previous comments by Peter and Thibault.

What you showed here is a proposal for a Metricbeat DaemonSet in charge of monitoring each Kubernetes host (with an extra autodiscover section with hints enabled, which in my view should only be added if it's really wanted ([1])).
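For reference, hints-based autodiscover is the part being cautioned about: with hints enabled, any pod can opt itself into scraping via annotations, without the Beat config listing it. A minimal sketch (provider fields per the Beats hints documentation; annotation values are illustrative):

```yaml
# Hints-based autodiscover: pods opt in via co.elastic.metrics/* annotations.
metricbeat:
  autodiscover:
    providers:
      - type: kubernetes
        node: ${NODE_NAME}
        hints:
          enabled: true
          # no default_config, so unannotated pods are left alone
```

A pod would then request scraping with annotations such as `co.elastic.metrics/module: nginx` and `co.elastic.metrics/hosts: "${data.host}:80"` — which is exactly why this should only be enabled deliberately.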

What Peter showed is an example of a Metricbeat Deployment configured to monitor Elasticsearch pods. To scrape specific pods with the specified module, the condition is needed; otherwise you might end up scraping all Kubernetes pods even if they are not Elasticsearch pods (and the elasticsearch module configured in the input would fail against them).

I would strongly recommend taking a deep look at all the manifests and docs I share in this repo, as I try to explain and add comments to both of the examples shown here (among other use cases).

  • Documented example of a similar manifest for Elasticsearch and Kibana monitoring here

  • Documented example for Kubernetes hosts monitoring here (in this example I explain what I mentioned above [1]).

It's important to understand every possibility of autodiscover and its implications, because thanks to the flexibility it offers we might end up with an unwanted configuration or needlessly duplicate our metrics retrieval.

For example, looking at both examples, you could build a single DaemonSet that does everything (Kubernetes monitoring at host level plus scraping Elasticsearch pods based on some conditions). But again, the config would have to differ: autodiscover running in a DaemonSet makes sense at node level, while autodiscover running in a Deployment usually makes sense at cluster level.
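The node-level versus cluster-level distinction maps to the provider's scope setting. A sketch of the DaemonSet variant (field names per the Beats kubernetes autodiscover provider; the condition and module details are assumptions consistent with the earlier discussion):

```yaml
# DaemonSet variant: each instance watches only its own node's pods.
metricbeat:
  autodiscover:
    providers:
      - type: kubernetes
        node: ${NODE_NAME}
        scope: node          # a Deployment would use scope: cluster,
                             # typically combined with unique: true
        templates:
          - condition:
              contains:
                kubernetes.labels.scrape: es
            config:
              - module: elasticsearch
                metricsets: ["node", "node_stats"]
                hosts: ["https://${data.host}:${data.ports.https}"]
```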

I hope this helps a bit. Take a look also at the official autodiscover docs, because there are many ways to achieve the same thing.

Many thanks for this elaborate answer @eedugon

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.