Metricbeat on Kubernetes not reporting correct hostname

I have deployed Metricbeat into a k8s cluster using the Elastic Helm chart; however, in the Kibana Inventory view the hostname shows as the pod name, not the actual hostname of the node. The only change to the default config file was the output to Elastic Cloud. The config file is:

    system:
      hostfs: /hostfs
    metricbeat.modules:
    - module: kubernetes
      metricsets:
        - container
        - node
        - pod
        - system
        - volume
      period: 10s
      host: "${NODE_NAME}"
      hosts: ["${NODE_NAME}:10255"]
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
    - module: kubernetes
      enabled: true
      metricsets:
        - event
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5
        by_memory: 5
    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'

I have verified inside the pods that the NODE_NAME env variable is set and correct. I have tried adding just `add_metadata: true` instead of the `processors` block, which came from the configuration here. I have also looked through this previous post, but it did not resolve the issue.
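For reference, the `add_metadata: true` variant I tried looked roughly like this (same kubernetes module, with the `processors` block replaced by the module-level flag):

    metricbeat.modules:
    - module: kubernetes
      metricsets:
        - container
        - node
        - pod
        - system
        - volume
      period: 10s
      host: "${NODE_NAME}"
      hosts: ["${NODE_NAME}:10255"]
      add_metadata: true

The result in the Inventory view was the same either way.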

Hi @Ronin,

unfortunately I don't have an authoritative answer on how variables expand at config time, but could you try removing the `host` element from that config, forcing Beats to guess the node name?
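Something like this, i.e. the same module block with `host` dropped so Beats falls back to autodetection:

    - module: kubernetes
      metricsets:
        - container
        - node
        - pod
        - system
        - volume
      period: 10s
      hosts: ["${NODE_NAME}:10255"]
      processors:
      - add_kubernetes_metadata:
          in_cluster: true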

@pmercado Just tried that and it's still showing as the pod name.
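One other approach I've seen suggested but haven't tested yet: the reference Metricbeat Kubernetes manifests run the DaemonSet on the host network, so the pod inherits the node's hostname instead of the pod name. In the DaemonSet pod spec that would look like:

    # DaemonSet pod spec fragment (untested on my side; taken from the
    # approach used in the reference Metricbeat k8s manifests)
    spec:
      template:
        spec:
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet

I'll report back if I get a chance to try it before this thread closes.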

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.