Hi everybody, I have installed Elasticsearch, Kibana, Filebeat and Metricbeat on an AKS cluster using Helm.
But for the Metricbeat instance that collects the state metrics, the "beat.hostname" field contains the pod name, which produces some "false" stats in the dashboards and in the Infrastructure view.
I saw that some other people have run into this issue too --> here
So I tried adding an environment variable to "force" the hostname, but it doesn't work.
I use this Helm chart: https://github.com/helm/charts/tree/master/stable/metricbeat
This is my values file for the metricbeat chart:
```yaml
image:
  tag: 6.5.4

# The instances created by the daemonset retrieve most metrics from the host
daemonset:
  podAnnotations: []
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
  nodeSelector: {}
  config:
    metricbeat.config:
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: true
    output.file:
      enabled: false
    output.elasticsearch:
      hosts: ["elasticsearch-client"]
    setup.dashboards.enabled: true
    setup.kibana:
      host: "kibana:443"
  modules:
    system:
      enabled: true
      config:
        - module: system
          period: 10s
          metricsets:
            - cpu
            - load
            - memory
            - network
            - process
            - process_summary
            - core
            - diskio
            - socket
            - raid
          processes: ['.*']
          process.include_top_n:
            by_cpu: 5      # include top 5 processes by CPU
            by_memory: 5   # include top 5 processes by memory
        - module: system
          period: 1m
          metricsets:
            - filesystem
            - fsstat
          processors:
            - drop_event.when.regexp:
                system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
            - add_kubernetes_metadata:
                in_cluster: true
    kubernetes:
      enabled: true
      config:
        - module: kubernetes
          metricsets:
            - node
            - system
            - pod
            - container
            - volume
            - event
            - apiserver
          period: 10s
          host: ${NODE_NAME}
          hosts: ["localhost:10255"]
          processors:
            - add_kubernetes_metadata:
                in_cluster: true
    docker:
      enabled: true
      config:
        - module: docker
          metricsets:
            - container
            - cpu
            - diskio
            - healthcheck
            - info
            - memory
            - network
          hosts: ["unix:///var/run/docker.sock"]
          period: 10s
          enabled: true
          processors:
            - add_kubernetes_metadata:
                in_cluster: true

# The instance created by the deployment retrieves metrics that are unique for the
# whole cluster, like Kubernetes events or kube-state-metrics
deployment:
  podAnnotations: []
  tolerations: []
  nodeSelector: {}
  config:
    metricbeat.config:
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
    output.file:
      enabled: false
    output.elasticsearch:
      hosts: ["elasticsearch-client"]
  modules:
    kubernetes:
      enabled: true
      config:
        - module: kubernetes
          metricsets:
            - state_node
            - state_deployment
            - state_replicaset
            - state_pod
            - state_container
          period: 10s
          processors:
            - add_kubernetes_metadata:
                in_cluster: true
          host: ${NODE_NAME}
          hosts: ["kube-state-metrics:8080"]

extraEnv:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```
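One thing I was thinking of trying (untested, just a sketch): Metricbeat has a top-level `name` setting that overrides the name the Beat reports about itself. Since `NODE_NAME` is already injected via `extraEnv`, adding something like this to the deployment's `config` section might tie the state metrics back to the node:

```yaml
# Hedged sketch, not verified on 6.5.4: "name" sets the beat.name field.
# beat.hostname itself comes from the OS hostname, which inside a pod
# is the pod name, so this may only fix dashboards that key on beat.name.
name: ${NODE_NAME}
```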
The issue only affects the pod deployed by the Deployment; the pods deployed by the DaemonSet work very well.
I can see in the container that the NODE_NAME environment variable is set correctly, but HOSTNAME is the pod name.
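For reference, this is roughly how I checked the environment inside the deployment's pod (the pod name below is a placeholder; it needs a live cluster to run):

```shell
# Placeholder pod name -- substitute the actual metricbeat deployment pod.
kubectl exec metricbeat-deployment-pod -- \
  sh -c 'echo "NODE_NAME=$NODE_NAME HOSTNAME=$HOSTNAME"'
# NODE_NAME shows the AKS node name, while HOSTNAME is the pod name.
```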
I've added some screenshots to visualize the problem.
Thanks for your help!