Unable to configure Heartbeat autodiscovery with hints in an ECK setup

Hello!

I'm trying to configure Heartbeat in ECK using autodiscovery with hints, but the configuration does not seem to work under ECK. The same setup used to work when I ran a self-managed Elastic stack in Kubernetes without ECK. It currently looks like ECK does not accept the autodiscovery setup.
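For context, with hints-based autodiscovery Heartbeat builds monitors from annotations on the monitored pods, roughly like this (an illustrative snippet; the pod name, image, and port are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: my-app # hypothetical pod to be monitored
  annotations:
    # Heartbeat hints: create a TCP monitor for this pod
    co.elastic.monitor/type: tcp
    co.elastic.monitor/hosts: ${data.host}:8080
    co.elastic.monitor/schedule: "@every 10s"
spec:
  containers:
    - name: my-app
      image: my-app:latest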

Here are the YAML files I'm using for Heartbeat:

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: heartbeat
  namespace: elastic
spec:
  type: heartbeat
  version: 8.4.3
  elasticsearchRef:
    name: elasticsearch
  config:
    heartbeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true

        # Autodiscover pods
        - type: kubernetes
          resource: pod
          scope: cluster
          node: ${NODE_NAME}
          hints.enabled: true
        - type: kubernetes
          resource: service
          scope: cluster
          node: ${NODE_NAME}
          hints.enabled: true
        - type: kubernetes
          resource: node
          node: ${NODE_NAME}
          scope: cluster
          templates:
            # Example, check SSH port of all cluster nodes
            - condition: ~
              config:
                - hosts:
                    - ${data.host}:22
                  name: ${data.kubernetes.node.name}
                  schedule: "@every 10s"
                  timeout: 5s
                  type: tcp
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}


    # heartbeat.monitors:
    #   - type: tcp
    #     schedule: "@every 5s"
    #     hosts: ["elasticsearch-es-http.default.svc:9200"]
    #   - type: tcp
    #     schedule: "@every 5s"
    #     hosts: ["kibana-kb-http.default.svc:5601"]
  deployment:
    replicas: 1
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0
        containers:
          - name: heartbeat
            resources:
              limits:
                memory: 1536Mi
              requests:
                # For synthetics, 2 full cores is a good starting point for relatively consistent performance of a single concurrent check.
                # For lightweight checks, as low as 100m is fine.
                cpu: 2000m
                # A high value like this is encouraged for browser based monitors.
                # Lightweight checks use substantially less, even 128Mi is fine for those.
                memory: 1536Mi
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: heartbeat
  # should be the namespace where heartbeat is running
  namespace: elastic
  labels:
    k8s-app: heartbeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: heartbeat
  namespace: elastic
subjects:
  - kind: ServiceAccount
    name: heartbeat
    namespace: elastic
roleRef:
  kind: Role
  name: heartbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heartbeat
  namespace: elastic
  labels:
    k8s-app: heartbeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: heartbeat-kubeadm-config
  namespace: elastic
subjects:
  - kind: ServiceAccount
    name: heartbeat
    namespace: elastic
roleRef:
  kind: Role
  name: heartbeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: heartbeat-kubeadm-config
  namespace: elastic
  labels:
    k8s-app: heartbeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]

The error message I'm getting inside the heartbeat pod:

│ {"log.level":"error","@timestamp":"2022-10-24T11:11:14.739Z","log.origin":{"file.name":"instance/beat.go","file.line":1056},"message":"Exiting: error in autodiscover provider settings: error setting up kubernetes autodiscover provider: unable to build kube config due to error: invalid configuration: no configura ││ Exiting: error in autodiscover provider settings: error setting up kubernetes autodiscover provider: unable to build kube config due to error: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable   

Hi @diadoom,

Thanks for letting us know about this issue.

Just making sure: are you deploying the same autodiscovery config that you had on your self-managed cluster? It could be that you have different RBAC controls on ECK; you'll need to give the Heartbeat container access to the Kubernetes API.

This section of the ECK documentation should help you set it up: Configuration | Elastic Cloud on Kubernetes [2.4] | Elastic.

Basically, you'll need to give Heartbeat permission to access pods, nodes, and so on.
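Something along these lines should work as a starting point (a minimal sketch based on the standard Heartbeat RBAC setup; adjust the namespace and resource list to your cluster):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heartbeat
  namespace: elastic
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heartbeat
rules:
  # Autodiscovery needs to watch cluster-scoped resources
  - apiGroups: [""]
    resources: ["nodes", "namespaces", "pods", "services", "events"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heartbeat
subjects:
  - kind: ServiceAccount
    name: heartbeat
    namespace: elastic
roleRef:
  kind: ClusterRole
  name: heartbeat
  apiGroup: rbac.authorization.k8s.io

Remember to reference the service account from the Beat's podTemplate via serviceAccountName: heartbeat.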

Hope that helps.

Hi @emilioalvap,

Thank you for your response.
I indeed forgot to add the ClusterRole and ClusterRoleBinding.
However, after applying them the same error remains. It seems as if the ECK version of Heartbeat won't accept the autodiscovery configuration, though I think it should. I could not find anything else about this online, however.

Here is my updated Heartbeat YAML file:

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: heartbeat
  namespace: elastic
spec:
  type: heartbeat
  version: 8.4.3
  elasticsearchRef:
    name: elasticsearch
  config:
    heartbeat.autodiscover:
      providers:
        # Autodiscover pods
        - type: kubernetes
          resource: pod
          scope: cluster
          node: ${NODE_NAME}
          hints.enabled: true
        - type: kubernetes
          resource: service
          scope: cluster
          node: ${NODE_NAME}
          hints.enabled: true
        - type: kubernetes
          resource: node
          node: ${NODE_NAME}
          scope: cluster
          templates:
            # Example, check SSH port of all cluster nodes
            - condition: ~
              config:
                - hosts:
                    - ${data.host}:22
                  name: ${data.kubernetes.node.name}
                  schedule: "@every 10s"
                  timeout: 5s
                  type: tcp
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
    # heartbeat.monitors:
    #   - type: tcp
    #     schedule: "@every 5s"
    #     hosts: ["elasticsearch-es-http.elastic.svc:9200"]
    #   - type: tcp
    #     schedule: "@every 5s"
    #     hosts: ["kibana-kb-http.elastic.svc:5601"]
  deployment:
    replicas: 1
    podTemplate:
      spec:
        # securityContext:
        #   runAsUser: 0
        serviceAccountName: heartbeat
        hostNetwork: true
        dnsPolicy: ClusterFirstWithHostNet
        securityContext:
          runAsUser: 0
        containers:
          - name: heartbeat
            securityContext:
              runAsUser: 0
              privileged: true

            resources:
              limits:
                memory: 1536Mi
              requests:
                # For synthetics, 2 full cores is a good starting point for relatively consistent performance of a single concurrent check.
                # For lightweight checks, as low as 100m is fine.
                cpu: 2000m
                # A high value like this is encouraged for browser based monitors.
                # Lightweight checks use substantially less, even 128Mi is fine for those.
                memory: 1536Mi
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heartbeat
  labels:
    k8s-app: heartbeat
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - namespaces
      - pods
      - events
      - services
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - replicasets
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heartbeat
subjects:
  - kind: ServiceAccount
    name: heartbeat
    namespace: elastic
roleRef:
  kind: ClusterRole
  name: heartbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: heartbeat
  # should be the namespace where heartbeat is running
  namespace: elastic
  labels:
    k8s-app: heartbeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: heartbeat
  namespace: elastic
subjects:
  - kind: ServiceAccount
    name: heartbeat
    namespace: elastic
roleRef:
  kind: Role
  name: heartbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heartbeat
  namespace: elastic
  labels:
    k8s-app: heartbeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: heartbeat-kubeadm-config
  namespace: elastic
subjects:
  - kind: ServiceAccount
    name: heartbeat
    namespace: elastic
roleRef:
  kind: Role
  name: heartbeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: heartbeat-kubeadm-config
  namespace: elastic
  labels:
    k8s-app: heartbeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]

Elasticsearch YAML:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: elastic
spec:
  version: 8.4.3
  nodeSets:
    - name: master
      count: 1
      config:
        node.store.allow_mmap: false
        xpack.monitoring.collection.enabled: true
        xpack.monitoring.elasticsearch.collection.enabled: false
      podTemplate:
        metadata:
          labels:
            app: elasticsearch
          annotations:
            co.elastic.logs/module: elasticsearch
            co.elastic.metrics/module: elasticsearch
            co.elastic.metrics/xpack.enabled: "true"
            co.elastic.metrics/metricsets: node, node_stats,index,shard
            co.elastic.metrics/hosts: ${data.host}:9200
        spec:
          initContainers:
            - name: sysctl
              securityContext:
                privileged: true
                runAsUser: 0
              command: ["sh", "-c", "sysctl -w vm.max_map_count=262144"]
          containers:
            - name: elasticsearch
              resources:
                requests:
                  memory: "10Gi"
                limits:
                  memory: "10Gi"
              # envFrom:
              #   - secretRef:
              #       name: elastic-secret

      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 80Gi
            storageClassName: standard

Kibana YAML:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: elastic
  labels:
    app: kibana
spec:
  version: 8.4.3
  count: 1
  config:
    monitoring.ui.container.elasticsearch.enabled: true
    xpack.monitoring.collection.enabled: true
    xpack.fleet.agents.elasticsearch.hosts:
      ["https://elasticsearch-es-http.elastic.svc:9200"]
    xpack.fleet.agents.fleet_server.hosts:
      ["https://fleet-server-agent-http.elastic.svc:8220"]
    xpack.fleet.packages:
      - name: system
        version: latest
      - name: elastic_agent
        version: latest
      - name: fleet_server
        version: latest
      - name: apm
        version: latest
    xpack.fleet.agentPolicies:
      - name: Fleet Server on ECK policy
        id: eck-fleet-server
        is_default_fleet_server: true
        namespace: elastic
        monitoring_enabled:
          - logs
          - metrics
        unenroll_timeout: 900
        package_policies:
          - name: fleet_server-1
            id: fleet_server-1
            package:
              name: fleet_server
      - name: Elastic Agent on ECK policy
        id: eck-agent
        namespace: elastic
        monitoring_enabled:
          - logs
          - metrics
        unenroll_timeout: 900
        is_default: true
        package_policies:
          - name: system-1
            id: system-1
            package:
              name: system
          - package:
              name: apm
            name: apm-1
            inputs:
              - type: apm
                enabled: true
                vars:
                  - name: host
                    value: 0.0.0.0:8200
  elasticsearchRef:
    name: elasticsearch
  podTemplate:
    metadata:
      labels:
        app: kibana
      annotations:
        co.elastic.logs/module: kibana
        co.elastic.metrics/module: kibana
        co.elastic.metrics/metricsets: stats,status
        co.elastic.metrics/hosts: ${data.host}:5601
        co.elastic.metrics/xpack.enabled: "true"
    spec:
      containers:
        - name: kibana
          resources:
            requests:
              memory: "1Gi"
            limits:
              memory: "2Gi"
          env:
            - name: NODE_OPTIONS
              value: "--max_old_space_size=1024"

Full Heartbeat error log:

{"log.level":"info","@timestamp":"2022-10-25T15:19:19.924Z","log.origin":{"file.name":"instance/beat.go","file.line":707},"message":"Home path: [/usr/share/heartbeat] Config path: [/usr/share/heartbeat] Data path: [/usr/share/heartbeat/data] Logs path: [/usr/share/heartbeat/logs]","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.924Z","log.origin":{"file.name":"instance/beat.go","file.line":715},"message":"Beat ID: c34ccb9e-6b7c-469f-bfca-003288fd0d7c","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2022-10-25T15:19:19.927Z","log.logger":"add_cloud_metadata","log.origin":{"file.name":"add_cloud_metadata/provider_aws_ec2.go","file.line":81},"message":"read token request for getting IMDSv2 token returns empty: Put \"http://169.254.169.254/latest/api/token\": dial tcp 169.254.169.254:80: connect: connection refused. No token in the metadata request will be used.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.929Z","log.logger":"seccomp","log.origin":{"file.name":"seccomp/seccomp.go","file.line":124},"message":"Syscall filter successfully installed","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.929Z","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1081},"message":"Beat info","service.name":"heartbeat","system_info":{"beat":{"path":{"config":"/usr/share/heartbeat","data":"/usr/share/heartbeat/data","home":"/usr/share/heartbeat","logs":"/usr/share/heartbeat/logs"},"type":"heartbeat","uuid":"c34ccb9e-6b7c-469f-bfca-003288fd0d7c"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.929Z","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1090},"message":"Build info","service.name":"heartbeat","system_info":{"build":{"commit":"c2f2aba479653563dbaabefe0f86f5579708ec94","libbeat":"8.4.3","time":"2022-09-27T15:30:48.000Z","version":"8.4.3"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.929Z","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1093},"message":"Go runtime info","service.name":"heartbeat","system_info":{"go":{"os":"linux","arch":"amd64","max_procs":16,"version":"go1.17.12"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.929Z","log.logger":"add_cloud_metadata","log.origin":{"file.name":"add_cloud_metadata/add_cloud_metadata.go","file.line":102},"message":"add_cloud_metadata: hosting provider type not detected.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.930Z","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1097},"message":"Host info","service.name":"heartbeat","system_info":{"host":{"architecture":"x86_64","boot_time":"2022-10-25T06:24:30Z","containerized":true,"name":"minikube","ip":["127.0.0.1/8","172.17.0.1/16","192.168.49.2/24"],"kernel_version":"5.10.102.1-microsoft-standard-WSL2","mac":["02:42:9b:58:00:82","9a:9e:23:74:de:b4","22:16:f0:7e:e8:ee","02:42:c0:a8:31:02","3a:9a:2d:42:07:06","b6:91:ef:4a:d0:85","1a:53:7a:9a:f3:e0"],"os":{"type":"linux","family":"debian","platform":"ubuntu","name":"Ubuntu","version":"20.04.5 LTS (Focal Fossa)","major":20,"minor":4,"patch":5,"codename":"focal"},"timezone":"UTC","timezone_offset_sec":0,"id":"273e76aaed334e389acd3cb687b033f8"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.931Z","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1126},"message":"Process info","service.name":"heartbeat","system_info":{"process":{"capabilities":{"inheritable":null,"permitted":["net_raw"],"effective":["net_raw"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null},"cwd":"/usr/share/heartbeat","exe":"/usr/share/heartbeat/heartbeat","name":"heartbeat","pid":7,"ppid":1,"seccomp":{"mode":"filter","no_new_privs":true},"start_time":"2022-10-25T15:19:19.340Z"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.931Z","log.origin":{"file.name":"instance/beat.go","file.line":293},"message":"Setup Beat: heartbeat; Version: 8.4.3","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2022-10-25T15:19:19.937Z","log.logger":"cfgwarn","log.origin":{"file.name":"tlscommon/config.go","file.line":102},"message":"DEPRECATED: Treating the CommonName field on X.509 certificates as a host name when no Subject Alternative Names are present is going to be removed. Please update your certificates if needed. Will be removed in version: 8.0.0","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.938Z","log.logger":"esclientleg","log.origin":{"file.name":"eslegclient/connection.go","file.line":108},"message":"elasticsearch url: https://elasticsearch-es-http.elastic.svc:9200","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.938Z","log.logger":"publisher","log.origin":{"file.name":"pipeline/module.go","file.line":113},"message":"Beat name: minikube","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.938Z","log.origin":{"file.name":"scheduler/scheduler.go","file.line":79},"message":"limiting to 2 concurrent jobs for 'browser' type","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.938Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":143},"message":"Starting metrics logging every 30s","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.938Z","log.origin":{"file.name":"instance/beat.go","file.line":470},"message":"heartbeat start running.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.939Z","log.origin":{"file.name":"beater/heartbeat.go","file.line":100},"message":"heartbeat is running! Hit CTRL-C to stop it.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.939Z","log.origin":{"file.name":"beater/heartbeat.go","file.line":102},"message":"Effective user/group ids: %d/%d, with groups: %v0 0 []","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.942Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":193},"message":"Total metrics","service.name":"heartbeat","monitoring":{"metrics":{"beat":{"cgroup":{"cpu":{"cfs":{"period":{"us":100000},"quota":{"us":0}},"id":"/","stats":{"periods":0,"throttled":{"ns":0,"periods":0}}},"cpuacct":{"id":"/","total":{"ns":162623600}},"memory":{"id":"/","mem":{"limit":{"bytes":1610612736},"usage":{"bytes":45027328}}}},"cpu":{"system":{"ticks":50,"time":{"ms":50}},"total":{"ticks":140,"time":{"ms":140},"value":140},"user":{"ticks":90,"time":{"ms":90}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":8},"info":{"ephemeral_id":"d6db96d3-c44d-47d9-9f19-b4192af6ed84","name":"heartbeat","uptime":{"ms":44},"version":"8.4.3"},"memstats":{"gc_next":11872576,"memory_alloc":6712616,"memory_sys":26559496,"memory_total":14777216,"rss":116613120},"runtime":{"goroutines":28}},"heartbeat":{"browser":{"endpoint_starts":0,"endpoint_stops":0,"monitor_starts":0,"monitor_stops":0},"http":{"endpoint_starts":0,"endpoint_stops":0,"monitor_starts":0,"monitor_stops":0},"icmp":{"endpoint_starts":0,"endpoint_stops":0,"monitor_starts":0,"monitor_stops":0},"scheduler":{"jobs":{"active":0,"missed_deadline":0},"tasks":{"active":0,"waiting":0}},"tcp":{"endpoint_starts":0,"endpoint_stops":0,"monitor_starts":0,"monitor_stops":0}},"libbeat":{"config":{"module":{"running":0,"starts":0,"stops":0},"reloads":0,"scans":0},"output":{"events":{"acked":0,"active":0,"batches":0,"dropped":0,"duplicates":0,"failed":0,"toomany":0,"total":0},"read":{"bytes":0,"errors":0},"type":"elasticsearch","write":{"bytes":0,"errors":0}},"pipeline":{"clients":0,"events":{"active":0,"dropped":0,"failed":0,"filtered":0,"published":0,"retry":0,"total":0},"queue":{"acked":0,"max_events":4096}}},"system":{"cpu":{"cores":16},"load":{"1":0.45,"15":1.23,"5":1.04,"norm":{"1":0.0281,"15":0.0769,"5":0.065}}}},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.942Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":194},"message":"Uptime: 46.71202ms","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.942Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":161},"message":"Stopping metrics logging.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-10-25T15:19:19.942Z","log.origin":{"file.name":"instance/beat.go","file.line":475},"message":"heartbeat stopped.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-10-25T15:19:19.942Z","log.origin":{"file.name":"instance/beat.go","file.line":1056},"message":"Exiting: error in autodiscover provider settings: error setting up kubernetes autodiscover provider: unable to build kube config due to error: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable","service.name":"heartbeat","ecs.version":"1.6.0"}
Exiting: error in autodiscover provider settings: error setting up kubernetes autodiscover provider: unable to build kube config due to error: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

Hi @diadoom,

I checked locally and the cluster role binding should be OK now. Before we explore a different route, could you try adding automountServiceAccountToken: true at the same level as serviceAccountName: heartbeat? I just want to make sure the token is available in the pod.
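For clarity, that would look something like this in the Beat spec (only the relevant fields shown):

  deployment:
    podTemplate:
      spec:
        serviceAccountName: heartbeat
        # Explicitly mount the service account token so the in-cluster
        # kube config can be built from it:
        automountServiceAccountToken: true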

If you have access through kubectl, you can also check that the secret is being mounted at the correct location:

$ kubectl -n elastic describe pod/heartbeat-75c8dbb864-r2zdz
...    
Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9zknf (ro)
...

The token value should match the one that is listed in the service account secret:

$ kubectl -n elastic describe serviceaccounts/heartbeat
...
Tokens:              heartbeat-token-bpd9k
...

$ kubectl -n elastic get secret/heartbeat-token-bpd9k -o json | jq .data.token 
"ZXlK... // This should be the same as v

$ kubectl -n elastic exec <pod name>  -- cat /var/run/secrets/kubernetes.io/serviceaccount/token | base64 --
ZXlK...
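Alternatively, you can compare the two directly in one go (a sketch assuming jq and GNU coreutils are available; substitute your own secret and pod names):

$ diff <(kubectl -n elastic get secret/heartbeat-token-bpd9k -o json | jq -r .data.token | base64 -d) \
       <(kubectl -n elastic exec <pod name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token) \
    && echo "tokens match"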

This seems to do the trick; it is working! :smiley:

Thank you for your help and quick response :slight_smile:
