Hello everyone,
I’ve deployed Elastic Agent in standalone mode on my Kubernetes cluster (Elastic Stack version 8.14.3). While most of the data is being collected as expected, I’ve noticed that some important Kubernetes metrics, such as kubernetes.pod.cpu.usage.limit.pct, kubernetes.pod.memory.usage.node.pct, and others, are not being ingested.
Here’s some context about my setup:
- Kube-State-Metrics: Installed and working.
- Elastic Agent Configuration: Below are the redacted agent.yml and the deployment YAML I used.
apiVersion: v1
kind: ConfigMap
metadata:
name: agent-node-datastreams
namespace: kube-system
labels:
k8s-app: elastic-agent
data:
agent.yml: |-
id: e4a3ab5a-29d1-4360-b935-91c2d30f22e7
outputs:
default:
type: elasticsearch
hosts:
- 'https://myhostr:443'
username: ''
password: ''
preset: balanced
inputs:
- id: >-
kubernetes/metrics-kube-state-metrics-ed9fd52e-8a93-4b4f-a402-d5884b73d724
revision: 1
name: vs1-k8s-prd02
type: kubernetes/metrics
data_stream:
namespace: mynamespace
use_output: default
package_policy_id: ed9fd52e-8a93-4b4f-a402-d5884b73d724
streams:
- id: >-
kubernetes/metrics-kubernetes.state_container-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_container
metricsets:
- state_container
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_cronjob-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_cronjob
metricsets:
- state_cronjob
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_daemonset-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_daemonset
metricsets:
- state_daemonset
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_deployment-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_deployment
metricsets:
- state_deployment
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_job-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_job
metricsets:
- state_job
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_namespace-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_namespace
metricsets:
- state_namespace
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_node-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_node
metricsets:
- state_node
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_persistentvolume-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_persistentvolume
metricsets:
- state_persistentvolume
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_persistentvolumeclaim-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_persistentvolumeclaim
metricsets:
- state_persistentvolumeclaim
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_pod-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_pod
metricsets:
- state_pod
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_replicaset-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_replicaset
metricsets:
- state_replicaset
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_resourcequota-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_resourcequota
metricsets:
- state_resourcequota
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_service-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_service
metricsets:
- state_service
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_statefulset-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_statefulset
metricsets:
- state_statefulset
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_storageclass-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: metrics
dataset: kubernetes.state_storageclass
metricsets:
- state_storageclass
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 2m
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
meta:
package:
name: kubernetes
version: 1.62.2
- id: filestream-container-logs-ed9fd52e-8a93-4b4f-a402-d5884b73d724
revision: 1
name: vs1-k8s-prd02
type: filestream
data_stream:
namespace: mynamespace
use_output: default
package_policy_id: ed9fd52e-8a93-4b4f-a402-d5884b73d724
streams:
- id: >-
kubernetes-container-logs-${kubernetes.pod.name}-${kubernetes.container.id}
data_stream:
dataset: kubernetes.container_logs
paths:
- '/var/log/containers/*${kubernetes.container.id}.log'
prospector.scanner.symlinks: true
parsers:
- container:
stream: all
format: auto
processors:
- add_fields:
target: kubernetes
fields:
annotations.elastic_co/dataset: '${kubernetes.annotations.elastic.co/dataset|""}'
annotations.elastic_co/namespace: '${kubernetes.annotations.elastic.co/namespace|""}'
annotations.elastic_co/preserve_original_event: >-
${kubernetes.annotations.elastic.co/preserve_original_event|""}
- drop_fields:
fields:
- kubernetes.annotations.elastic_co/dataset
when:
equals:
kubernetes.annotations.elastic_co/dataset: ''
ignore_missing: true
- drop_fields:
fields:
- kubernetes.annotations.elastic_co/namespace
when:
equals:
kubernetes.annotations.elastic_co/namespace: ''
ignore_missing: true
- drop_fields:
fields:
- kubernetes.annotations.elastic_co/preserve_original_event
when:
equals:
kubernetes.annotations.elastic_co/preserve_original_event: ''
ignore_missing: true
- add_tags:
tags:
- preserve_original_event
when:
and:
- has_fields:
- >-
kubernetes.annotations.elastic_co/preserve_original_event
- regexp:
kubernetes.annotations.elastic_co/preserve_original_event: ^(?i)true$
meta:
package:
name: kubernetes
version: 1.62.2
- id: filestream-audit-logs-ed9fd52e-8a93-4b4f-a402-d5884b73d724
revision: 1
name: vs1-k8s-prd02
type: filestream
data_stream:
namespace: mynamespace
use_output: default
package_policy_id: ed9fd52e-8a93-4b4f-a402-d5884b73d724
streams:
- id: >-
filestream-kubernetes.audit_logs-ed9fd52e-8a93-4b4f-a402-d5884b73d724
data_stream:
type: logs
dataset: kubernetes.audit_logs
paths:
- /var/log/kubernetes/kube-apiserver-audit.log
exclude_files:
- .gz$
parsers:
- ndjson:
add_error_key: true
target: kubernetes.audit
meta:
package:
name: kubernetes
version: 1.62.2
secret_references: []
revision: 2
agent:
download:
sourceURI: 'https://artifacts.elastic.co/downloads/'
monitoring:
namespace: mynamespace
use_output: default
enabled: true
logs: true
metrics: true
features: {}
protection:
enabled: false
uninstall_token_hash: Bjq8hTW3czB8Iy6BwbKdAvNMLDksbhlpSPJRz866sd8=
signing_key: >-
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAErUMcqwp2WIAEKI39du/tsg1nlDCmaylmp2QBtVKGn7xZZno1ZMYvpujdrAZyWebcxLvIRUJ2llAs62TSfKve/g==
signed:
data: >-
eyJpZCI6ImU0YTNhYjVhLTI5ZDEtNDM2MC1iOTM1LTkxYzJkMzBmMjJlNyIsImFnZW50Ijp7ImZlYXR1cmVzIjp7fSwicHJvdGVjdGlvbiI6eyJlbmFibGVkIjpmYWxzZSwidW5pbnN0YWxsX3Rva2VuX2hhc2giOiJCanE4aFRXM2N6QjhJeTZCd2JLZEF2Tk1MRGtzYmhscFNQSlJ6ODY2c2Q4PSIsInNpZ25pbmdfa2V5IjoiTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFclVNY3F3cDJXSUFFS0kzOWR1L3RzZzFubERDbWF5bG1wMlFCdFZLR243eFpabm8xWk1ZdnB1amRyQVp5V2ViY3hMdklSVUoybGxBczYyVFNmS3ZlL2c9PSJ9fSwiaW5wdXRzIjpbeyJpZCI6Imt1YmVybmV0ZXMvbWV0cmljcy1rdWJlLXN0YXRlLW1ldHJpY3MtZWQ5ZmQ1MmUtOGE5My00YjRmLWE0MDItZDU4ODRiNzNkNzI0IiwibmFtZSI6InZzMS1rOHMtcHJkMDIiLCJyZXZpc2lvbiI6MSwidHlwZSI6Imt1YmVybmV0ZXMvbWV0cmljcyJ9LHsiaWQiOiJmaWxlc3RyZWFtLWNvbnRhaW5lci1sb2dzLWVkOWZkNTJlLThhOTMtNGI0Zi1hNDAyLWQ1ODg0YjczZDcyNCIsIm5hbWUiOiJ2czEtazhzLXByZDAyIiwicmV2aXNpb24iOjEsInR5cGUiOiJmaWxlc3RyZWFtIn0seyJpZCI6ImZpbGVzdHJlYW0tYXVkaXQtbG9ncy1lZDlmZDUyZS04YTkzLTRiNGYtYTQwMi1kNTg4NGI3M2Q3MjQiLCJuYW1lIjoidnMxLWs4cy1wcmQwMiIsInJldmlzaW9uIjoxLCJ0eXBlIjoiZmlsZXN0cmVhbSJ9XX0=
signature: >-
MEUCIFwXZini70ymDozOqcL4DDIqr8cGB58SKsMZUyNi2QBWAiEAhTnUwUaLMalLxOBFNWVWEMCEvjIe1OHsW5T2844ySgc=
output_permissions:
default:
_elastic_agent_monitoring:
indices:
- names:
- logs-elastic_agent.apm_server-mynamespace
privileges: &ref_0
- auto_configure
- create_doc
- names:
- metrics-elastic_agent.apm_server-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.auditbeat-mynamespace
privileges: *ref_0
- names:
- metrics-elastic_agent.auditbeat-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.cloud_defend-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.cloudbeat-mynamespace
privileges: *ref_0
- names:
- metrics-elastic_agent.cloudbeat-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent-mynamespace
privileges: *ref_0
- names:
- metrics-elastic_agent.elastic_agent-mynamespace
privileges: *ref_0
- names:
- metrics-elastic_agent.endpoint_security-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.endpoint_security-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.filebeat_input-mynamespace
privileges: *ref_0
- names:
- metrics-elastic_agent.filebeat_input-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.filebeat-mynamespace
privileges: *ref_0
- names:
- metrics-elastic_agent.filebeat-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.fleet_server-mynamespace
privileges: *ref_0
- names:
- metrics-elastic_agent.fleet_server-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.heartbeat-mynamespace
privileges: *ref_0
- names:
- metrics-elastic_agent.heartbeat-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.metricbeat-mynamespace
privileges: *ref_0
- names:
- metrics-elastic_agent.metricbeat-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.osquerybeat-mynamespace
privileges: *ref_0
- names:
- metrics-elastic_agent.osquerybeat-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.packetbeat-mynamespace
privileges: *ref_0
- names:
- metrics-elastic_agent.packetbeat-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.pf_elastic_collector-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.pf_elastic_symbolizer-mynamespace
privileges: *ref_0
- names:
- logs-elastic_agent.pf_host_agent-mynamespace
privileges: *ref_0
_elastic_agent_checks:
cluster:
- monitor
ed9fd52e-8a93-4b4f-a402-d5884b73d724:
indices:
- names:
- metrics-kubernetes.state_container-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_cronjob-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_daemonset-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_deployment-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_job-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_namespace-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_node-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_persistentvolume-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_persistentvolumeclaim-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_pod-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_replicaset-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_resourcequota-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_service-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_statefulset-mynamespace
privileges: *ref_0
- names:
- metrics-kubernetes.state_storageclass-mynamespace
privileges: *ref_0
- names:
- logs-*-*
privileges: *ref_0
- names:
- logs-kubernetes.audit_logs-mynamespace
privileges: *ref_0
---
# For more information refer https://www.elastic.co/guide/en/fleet/current/running-on-kubernetes-standalone.html
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: elastic-agent
namespace: kube-system
labels:
app: elastic-agent
spec:
selector:
matchLabels:
app: elastic-agent
template:
metadata:
labels:
app: elastic-agent
spec:
# Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
# Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: elastic-agent
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
# Uncomment if using hints feature
#initContainers:
# - name: k8s-templates-downloader
# image: busybox:1.28
# command: ['sh']
# args:
# - -c
# - >-
# mkdir -p /etc/elastic-agent/inputs.d &&
# wget -O - https://github.com/elastic/elastic-agent/archive/8.14.tar.gz | tar xz -C /etc/elastic-agent/inputs.d --strip=5 "elastic-agent-8.14/deploy/kubernetes/elastic-agent/templates.d"
# volumeMounts:
# - name: external-inputs
# mountPath: /etc/elastic-agent/inputs.d
containers:
- name: elastic-agent
image: docker.elastic.co/beats/elastic-agent:8.14.3
args: ["-c", "/etc/elastic-agent/agent.yml", "-e"]
env:
# The basic authentication username used to connect to Elasticsearch
# This user needs the privileges required to publish events to Elasticsearch.
- name: ES_USERNAME
value: "elastic"
# The basic authentication password used to connect to Elasticsearch
- name: ES_PASSWORD
value: "changeme"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: STATE_PATH
value: "/etc/elastic-agent"
# The following ELASTIC_NETINFO:false variable will disable the netinfo.enabled option of add-host-metadata processor. This will remove fields host.ip and host.mac.
# For more info: https://www.elastic.co/guide/en/beats/metricbeat/current/add-host-metadata.html
- name: ELASTIC_NETINFO
value: "false"
securityContext:
runAsUser: 0
# The following capabilities are needed for 'Defend for containers' integration (cloud-defend)
# If you are using this integration, please uncomment these lines before applying.
#capabilities:
# add:
# - BPF # (since Linux 5.8) allows loading of BPF programs, create most map types, load BTF, iterate programs and maps.
# - PERFMON # (since Linux 5.8) allows attaching of BPF programs used for performance metrics and observability operations.
# - SYS_RESOURCE # Allow use of special resources or raising of resource limits. Used by 'Defend for Containers' to modify 'rlimit_memlock'
########################################################################################
# The following capabilities are needed for Universal Profiling.
# More fine-grained capabilities are only available for newer Linux kernels.
# If you are using the Universal Profiling integration, please uncomment these lines before applying.
#procMount: "Unmasked"
#privileged: true
#capabilities:
# add:
# - SYS_ADMIN
resources:
limits:
cpu: 500m
memory: 700Mi
requests:
cpu: 100m
memory: 400Mi
volumeMounts:
- name: datastreams
mountPath: /etc/elastic-agent/agent.yml
readOnly: true
subPath: agent.yml
# Uncomment if using hints feature
#- name: external-inputs
# mountPath: /etc/elastic-agent/inputs.d
- name: proc
mountPath: /hostfs/proc
readOnly: true
- name: cgroup
mountPath: /hostfs/sys/fs/cgroup
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
- name: etc-full
mountPath: /hostfs/etc
readOnly: true
- name: var-lib
mountPath: /hostfs/var/lib
readOnly: true
- name: sys-kernel-debug
mountPath: /sys/kernel/debug
- name: elastic-agent-state
mountPath: /usr/share/elastic-agent/state
# If you are using the Universal Profiling integration, please uncomment these lines before applying.
#- name: universal-profiling-cache
# mountPath: /var/cache/Elastic
volumes:
- name: datastreams
configMap:
defaultMode: 0640
name: agent-node-datastreams
# Uncomment if using hints feature
#- name: external-inputs
# emptyDir: {}
- name: proc
hostPath:
path: /proc
- name: cgroup
hostPath:
path: /sys/fs/cgroup
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
# The following volumes are needed for Cloud Security Posture integration (cloudbeat)
# If you are not using this integration, then these volumes and the corresponding
# mounts can be removed.
- name: etc-full
hostPath:
path: /etc
- name: var-lib
hostPath:
path: /var/lib
# Needed for 'Defend for containers' integration (cloud-defend) and Universal Profiling
# If you are not using one of these integrations, then these volumes and the corresponding
# mounts can be removed.
- name: sys-kernel-debug
hostPath:
path: /sys/kernel/debug
# Mount /var/lib/elastic-agent/kube-system/state to store elastic-agent state
# Update 'kube-system' with the namespace of your agent installation
- name: elastic-agent-state
hostPath:
path: /var/lib/elastic-agent/kube-system/state
type: DirectoryOrCreate
# Mount required for Universal Profiling.
# If you are using the Universal Profiling integration, please uncomment these lines before applying.
#- name: universal-profiling-cache
# hostPath:
# path: /var/cache/Elastic
# type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: elastic-agent
subjects:
- kind: ServiceAccount
name: elastic-agent
namespace: kube-system
roleRef:
kind: ClusterRole
name: elastic-agent
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: kube-system
name: elastic-agent
subjects:
- kind: ServiceAccount
name: elastic-agent
namespace: kube-system
roleRef:
kind: Role
name: elastic-agent
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: elastic-agent-kubeadm-config
namespace: kube-system
subjects:
- kind: ServiceAccount
name: elastic-agent
namespace: kube-system
roleRef:
kind: Role
name: elastic-agent-kubeadm-config
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: elastic-agent
labels:
k8s-app: elastic-agent
rules:
- apiGroups: [""]
resources:
- nodes
- namespaces
- events
- pods
- services
- configmaps
# Needed for cloudbeat
- serviceaccounts
- persistentvolumes
- persistentvolumeclaims
verbs: ["get", "list", "watch"]
# Enable this rule only if planning to use kubernetes_secrets provider
#- apiGroups: [""]
# resources:
# - secrets
# verbs: ["get"]
- apiGroups: ["extensions"]
resources:
- replicasets
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources:
- statefulsets
- deployments
- replicasets
- daemonsets
verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
resources:
- jobs
- cronjobs
verbs: ["get", "list", "watch"]
- apiGroups:
- ""
resources:
- nodes/stats
verbs:
- get
# Needed for apiserver
- nonResourceURLs:
- "/metrics"
verbs:
- get
# Needed for cloudbeat
- apiGroups: ["rbac.authorization.k8s.io"]
resources:
- clusterrolebindings
- clusterroles
- rolebindings
- roles
verbs: ["get", "list", "watch"]
# Needed for cloudbeat
- apiGroups: ["policy"]
resources:
- podsecuritypolicies
verbs: ["get", "list", "watch"]
- apiGroups: [ "storage.k8s.io" ]
resources:
- storageclasses
verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: elastic-agent
# Should be the namespace where elastic-agent is running
namespace: kube-system
labels:
k8s-app: elastic-agent
rules:
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: elastic-agent-kubeadm-config
namespace: kube-system
labels:
k8s-app: elastic-agent
rules:
- apiGroups: [""]
resources:
- configmaps
resourceNames:
- kubeadm-config
verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: elastic-agent
namespace: kube-system
labels:
k8s-app: elastic-agent
---
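One thing I noticed while comparing against the reference standalone manifest in the Elastic docs: that manifest also ships a second kubernetes/metrics input that scrapes each node’s kubelet (the kubernetes.node, kubernetes.pod, kubernetes.container, kubernetes.system, and kubernetes.volume datasets). As far as I understand, fields like kubernetes.pod.cpu.usage.limit.pct come from that pod dataset rather than from kube-state-metrics, and I don’t see an equivalent block in my agent.yml above. For reference, here is a trimmed sketch of what that kubelet input looks like (the id, namespace, and periods are placeholders loosely copied from the reference manifest, not from my policy):

- id: kubernetes/metrics-kubelet-example        # placeholder id, not taken from my policy
  type: kubernetes/metrics
  data_stream:
    namespace: mynamespace
  use_output: default
  streams:
    - data_stream:
        type: metrics
        dataset: kubernetes.pod        # dataset that carries kubernetes.pod.cpu.usage.* / kubernetes.pod.memory.usage.*
      metricsets:
        - pod
      add_metadata: true
      hosts:
        - 'https://${env.NODE_NAME}:10250'        # kubelet API on each node
      period: 10s
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: none
    - data_stream:
        type: metrics
        dataset: kubernetes.node
      metricsets:
        - node
      add_metadata: true
      hosts:
        - 'https://${env.NODE_NAME}:10250'
      period: 10s
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: none
    # the reference manifest defines kubernetes.container, kubernetes.system, and
    # kubernetes.volume streams with the same shape; omitted here for brevity

Could the absence of a kubelet-based input like this explain the missing pod usage metrics, or should the state_pod stream already cover them?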
Given this setup, I’m unsure whether the issue lies with:
- The Elastic Agent configuration.
- The installation or configuration of Kube-State-Metrics.
- Some other component I might have missed.
Any insights or troubleshooting suggestions would be greatly appreciated! If anyone has encountered a similar issue, I’d love to hear how you resolved it.
Thanks in advance for your help!