Fleet-managed Elastic Agent: Kubernetes Prometheus Metrics Autodiscover

Moved the discussion from GitHub to here: Prometheus Input - auto discovery and leader election in fleet · Issue #4126 · elastic/elastic-agent · GitHub

I will just continue the discussion from GitHub here: @gizas, is there any way to include the Kubernetes provider in the collector section of the Prometheus integration policy in Fleet?

I am not sure where to configure the provider part so that the kubernetes.* fields become available.

I have no option to configure that in the UI:

The kubernetes provider is enabled in the background.

To configure it we have this: Advanced Elastic Agent configuration managed by Fleet | Fleet and Elastic Agent Guide [8.14] | Elastic

But in your case you don't need - I guess - to configure anything in the provider; just use a Kubernetes variable in the condition of the policy.
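As an illustration of that advice, a condition using Kubernetes provider variables in an integration input could look like the sketch below. The annotation name prometheus.io/scrape is an assumption based on the common Prometheus scrape convention; adjust it to whatever annotation your pods actually carry.

```yaml
# Hypothetical sketch: a per-input condition built from Kubernetes
# provider variables (entered in the advanced options of the input).
# The kubernetes.* fields are populated by the built-in kubernetes
# provider, which is enabled by default on the agent.
condition: ${kubernetes.annotations.prometheus.io/scrape} == "true"
hosts:
  - "${kubernetes.pod.ip}:${kubernetes.annotations.prometheus.io/port|'8090'}"
```

The `|'8090'` part is the provider's default-value syntax: if the annotation is missing, the literal after the pipe is used instead.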

Thank you for the response. I will try that.

@Andreas_Gkizas I have tried it the following way:

Unfortunately, my agents changed to unhealthy after deploying the policy, with the following error:

[elastic_agent][error] applying new policy: could not create the map from the configuration: error unpacking config to MapStr object: missing field accessing 'output_permissions'

Did I configure anything wrong?

This is the API preview:

"prometheus-prometheus/metrics": {
  "enabled": true,
  "streams": {
    "prometheus.collector": {
      "enabled": true,
      "vars": {
        "hosts": [
          "${kubernetes.pod.ip}:${kubernetes.annotations.prometheus.io/port|'8090'}"
        ],
        "metrics_path": "${kubernetes.annotations.prometheus.io/path|'/metrics'}",
        "period": "30s",
        "use_types": true,
        "rate_counters": true,
        "leaderelection": true,
        "condition": "${kubernetes.annotations.prometheus.io/scrape} == \"true\"",
        "ssl.verification_mode": "none",
        "ssl.certificate_authorities": [],
        "metrics_filters.exclude": [],
        "metrics_filters.include": [],
        "headers": "# headers:\n#   Cookie: abcdef=123456\n#   My-Custom-Header: my-custom-value\n",
        "query": "# query:\n#   key: value\n",
        "data_stream.dataset": "prometheus.collector",
        "processors": "- add_fields:\r\n    target: kubernetes\r\n    fields:\r\n      annotations.elastic_co/dataset: ${kubernetes.annotations.elastic.co/dataset|\"\"}\r\n      annotations.elastic_co/namespace: ${kubernetes.annotations.elastic.co/namespace|\"\"}\r\n      annotations.elastic_co/preserve_original_event: ${kubernetes.annotations.elastic.co/preserve_original_event|\"\"}\r\n- drop_fields:\r\n    fields:\r\n      - kubernetes.annotations.elastic_co/dataset\r\n    when:\r\n      equals:\r\n        kubernetes.annotations.elastic_co/dataset: \"\"\r\n    ignore_missing: true\r\n- drop_fields:\r\n    fields:\r\n      - kubernetes.annotations.elastic_co/namespace\r\n    when:\r\n      equals:\r\n        kubernetes.annotations.elastic_co/namespace: \"\"\r\n    ignore_missing: true\r\n- drop_fields:\r\n    fields:\r\n      - kubernetes.annotations.elastic_co/preserve_original_event\r\n    when:\r\n      equals:\r\n        kubernetes.annotations.elastic_co/preserve_original_event: \"\"\r\n    ignore_missing: true\r\n- add_tags:\r\n    tags: [\"preserve_original_event\"]\r\n    when:\r\n      and:\r\n        - has_fields:\r\n            - kubernetes.annotations.elastic_co/preserve_original_event\r\n        - regexp:\r\n            kubernetes.annotations.elastic_co/preserve_original_event: \"^(?i)true$\""
      }
    }
  }
}
@Andreas_Gkizas Did I configure anything wrong in the above shared policy?

Hello @Alphayeeeet ,

It would help to note here the steps you followed. I guess the policy is configured through Fleet, right?

[quote="Alphayeeeet, post:5, topic:362808"]
missing field accessing 'output_permissions'
[/quote]

Output permissions are usually needed to be able to write to the appropriate indices. Are you sure the user or API key you authenticate with has the correct privileges assigned?

Another piece of advice is to replace the required fields with variables step by step. I see you use them in hosts, metrics_path, and condition.
I am not quite sure about metrics_path. Can you try, just as a test, to put a fixed string there?
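To make that debugging step concrete, a sketch of the stream vars with metrics_path temporarily hard-coded might look like this (the values are illustrative, taken from the policy above):

```yaml
# Debugging sketch: hard-code metrics_path to rule the annotation
# variable out as the source of the config-unpacking error.
hosts:
  - "${kubernetes.pod.ip}:${kubernetes.annotations.prometheus.io/port|'8090'}"
metrics_path: /metrics   # fixed string instead of ${kubernetes.annotations.prometheus.io/path|'/metrics'}
period: 30s
condition: ${kubernetes.annotations.prometheus.io/scrape} == "true"
```

If the error disappears with the fixed value, the variable substitution in metrics_path is the likely culprit; if it stays, the problem is elsewhere in the policy.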

Alternatively, use the ./elastic-agent inspect command to see the rendered configuration (refs here and here).
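For reference, the inspect command can be run like this; the pod name and namespace below are placeholders, assuming the agent runs as a DaemonSet in Kubernetes:

```sh
# On the host where the agent is installed:
elastic-agent inspect

# Or inside a Kubernetes agent pod (pod name/namespace are examples):
kubectl exec -it <elastic-agent-pod> -n kube-system -- elastic-agent inspect
```

The output shows the policy after variable substitution, which should reveal whether the kubernetes.* variables were resolved as expected.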

Hi @Andreas_Gkizas,

Yes, the policy is/was configured through Fleet. I will configure it again when I find time and share the requested rendered config via diagnostics.

Still, I would like to know if there is any documentation on how to configure such condition-based autodiscovery and provider-based parameters in Fleet integration policies.

As this should be quite a common use case, how to configure it deserves proper documentation.