Fleet-managed Elastic Agent: Kubernetes Prometheus Metrics Autodiscover

Moved the discussion from GitHub to here: Prometheus Input - auto discovery and leader election in fleet · Issue #4126 · elastic/elastic-agent · GitHub

I will just continue the discussion from GitHub here: @gizas Is there any way to include the Kubernetes provider in the Prometheus integration policy in Fleet, in the collector section?

I am not sure where to configure that provider to get the kubernetes.* fields.

I have no option to configure that in the UI:

The kubernetes provider is enabled in the background.

To configure it we have this: Advanced Elastic Agent configuration managed by Fleet | Fleet and Elastic Agent Guide [8.14] | Elastic
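For reference, a minimal sketch of what such a provider block might look like in those advanced agent settings (the node and scope values below are illustrative assumptions, not required defaults):

    # Sketch of a kubernetes provider block for the advanced agent settings.
    # Values are illustrative assumptions, not required defaults.
    providers:
      kubernetes:
        node: ${NODE_NAME}   # node the agent is scheduled on
        scope: node          # watch resources on this node only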

But in your case you don't need, I guess, to configure anything in the provider; just use a Kubernetes variable in the condition of the policy.
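For example, a condition based on the usual prometheus.io scrape annotation could look roughly like this (the annotation names and the fallback port are only an illustration of the variable syntax, not the integration's defaults):

    hosts:
      - "${kubernetes.pod.ip}:${kubernetes.annotations.prometheus.io/port|'9090'}"
    condition: ${kubernetes.annotations.prometheus.io/scrape} == "true"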

Thank you for the response. I will try that.

@Andreas_Gkizas I have tried it the following way:

Unfortunately my agents changed to unhealthy after deploying the policy, with the following error:

[elastic_agent][error] applying new policy: could not create the map from the configuration: error unpacking config to MapStr object: missing field accessing 'output_permissions'

Did I configure anything wrong?

This is the API preview:

"prometheus-prometheus/metrics": {
      "enabled": true,
      "streams": {
        "prometheus.collector": {
          "enabled": true,
          "vars": {
            "hosts": [
              "${kubernetes.pod.ip}:${kubernetes.annotations.prometheus.io/port|'8090'}"
            ],
            "metrics_path": "${kubernetes.annotations.prometheus.io/path|'/metrics'}",
            "period": "30s",
            "use_types": true,
            "rate_counters": true,
            "leaderelection": true,
            "condition": "${kubernetes.annotations.prometheus.io/scrape} == \"true\"",
            "ssl.verification_mode": "none",
            "ssl.certificate_authorities": [],
            "metrics_filters.exclude": [],
            "metrics_filters.include": [],
            "headers": "# headers:\n#   Cookie: abcdef=123456\n#   My-Custom-Header: my-custom-value\n",
            "query": "# query:\n#   key: value\n",
            "data_stream.dataset": "prometheus.collector",
            "processors": "- add_fields:\r\n    target: kubernetes\r\n    fields:\r\n      annotations.elastic_co/dataset: ${kubernetes.annotations.elastic.co/dataset|\"\"}\r\n      annotations.elastic_co/namespace: ${kubernetes.annotations.elastic.co/namespace|\"\"}\r\n      annotations.elastic_co/preserve_original_event: ${kubernetes.annotations.elastic.co/preserve_original_event|\"\"}\r\n- drop_fields:\r\n    fields:\r\n      - kubernetes.annotations.elastic_co/dataset\r\n    when:\r\n      equals:\r\n        kubernetes.annotations.elastic_co/dataset: \"\"\r\n    ignore_missing: true\r\n- drop_fields:\r\n    fields:\r\n      - kubernetes.annotations.elastic_co/namespace\r\n    when:\r\n      equals:\r\n        kubernetes.annotations.elastic_co/namespace: \"\"\r\n    ignore_missing: true\r\n- drop_fields:\r\n    fields:\r\n      - kubernetes.annotations.elastic_co/preserve_original_event\r\n    when:\r\n      equals:\r\n        kubernetes.annotations.elastic_co/preserve_original_event: \"\"\r\n    ignore_missing: true\r\n- add_tags:\r\n    tags: [\"preserve_original_event\"]\r\n    when:\r\n      and:\r\n        - has_fields:\r\n            - kubernetes.annotations.elastic_co/preserve_original_event\r\n        - regexp:\r\n            kubernetes.annotations.elastic_co/preserve_original_event: \"^(?i)true$\""
          }
        }
      }
}

@Andreas_Gkizas Did I configure anything wrong in the policy shared above?