Error: Using autodiscovery on OpenShift/Kubernetes without granting node permission to SA

Hi,
I want to upgrade Metricbeat from 7.10.3 to 7.11+, including the autodiscovery feature on OpenShift/Kubernetes, but I always get the following error:

E0503 09:16:45.462276 8 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:MYUSER:default" cannot list resource "nodes" in API group "" at the cluster scope

I am aware that as of version 7.11, autodiscovery requires updating the ClusterRole with get and watch access to nodes so that pod events can be enriched with node and namespace metadata. However, this is currently not an option in my client's OpenShift setup.
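For context, the RBAC change that would be required (and that I cannot apply) looks roughly like the sketch below. This is only my understanding based on the reference manifests; the ClusterRole and binding names are placeholders, and the ServiceAccount/namespace are taken from the error message above.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: metricbeat            # placeholder name
    rules:
      - apiGroups: [""]
        # 7.11+ autodiscovery wants to list/watch nodes (and namespaces) for metadata enrichment
        resources: ["nodes", "namespaces", "pods"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: metricbeat            # placeholder name
    subjects:
      - kind: ServiceAccount
        name: default             # the SA from the error message
        namespace: MYUSER
    roleRef:
      kind: ClusterRole
      name: metricbeat
      apiGroup: rbac.authorization.k8s.io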

From my understanding, it is possible to deactivate this feature by setting "add_resource_metadata", according to this comment on GitHub and the official documentation. I expected that with this flag I would not have to grant the service account node access. However, adding the required parameters does not change anything, and I don't get any additional information from the logs, even when I switch to debug log level.
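For reference, this is how I understand the relevant snippet from the documentation (namespace is shown as well for completeness, even though my actual config below only disables node):

    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          add_resource_metadata:
            node:
              enabled: false
            namespace:
              enabled: false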

I have already reduced the metricbeat.yml to a bare minimum to track down the error and am currently using the following config:

  metricbeat.yml: |-
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          scope: cluster
          namespace: MYNAMESPACE
          add_resource_metadata:
            node:
              enabled: false
          templates:
            - condition:
                contains:
                  kubernetes.labels.haproxy: metrics
              config:
                - module: prometheus
                  period: 60s
                  hosts: "${data.host}:9090"
                  metricsets: ["collector"]
                  metrics_path: /metrics
                  metrics_filters:
                    include:
                      - haproxy_frontend_current_sessions
                      - haproxy_frontend_max_sessions
                      - haproxy_frontend_limit_sessions
                      - haproxy_frontend_status
                      - haproxy_frontend_connections_rate_max
                      - haproxy_frontend_internal_errors_total
                      - haproxy_backend_status
                      - haproxy_backend_active_servers
                      - haproxy_server_status
    output.elasticsearch:
      hosts: [ "MYHOST" ]
      protocol: "https"
      username: MYUSERNAME
      password: MYPASSWORD
      indices:
        - index: "MYINDEX"
    setup.ilm:
      enabled: false

Is this a known issue, or is there something wrong with my configuration? Thanks
