Hello,

I am using ELK version 8.5.0 and I want to monitor a microservice system. The microservices are built on the Python framework FastAPI and run in Docker on Kubernetes (OCP). I have more than 100 microservices, so installing an agent one by one in each microservice is impossible, or at least very inefficient, because there are far too many of them.
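For context, each of these services is an ordinary Kubernetes Deployment, roughly like the sketch below; the name, image, and port here are placeholders for illustration only, not my real manifests:

# Illustrative placeholder for one of the ~100 FastAPI microservices.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-fastapi-service      # placeholder name
  namespace: cls-dev
  labels:
    app: example-fastapi-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-fastapi-service
  template:
    metadata:
      labels:
        app: example-fastapi-service
    spec:
      containers:
        - name: app
          image: registry.example.com/example-fastapi-service:latest  # placeholder image
          ports:
            - containerPort: 8000    # typical uvicorn/FastAPI port

So what I am looking for is a way to collect logs and metrics for all of these services from the cluster side, without installing anything into each Deployment.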
I have tried deploying Elastic Agent via Docker and on OCP, but it was not successful. The attachment below is my configuration for running Elastic Agent on Kubernetes.
apiVersion: v1
kind: ConfigMap
metadata:
name: agent-node-datastreams
namespace: cls-dev
labels:
k8s-app: elastic-agent
data:
agent.yml: |-
id: b975a220-fec5-11ed-9372-95f2b3598fbb
outputs:
default:
type: elasticsearch
hosts:
- 'http://IP Elastic:9200'
username: '${ES_USERNAME}'
password: '${ES_PASSWORD}'
inputs:
- id: kubernetes/metrics-kubelet-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
revision: 1
name: kubernetes-1
type: kubernetes/metrics
data_stream:
namespace: cls-dev
use_output: cls-dev
package_policy_id: 8594aef4-88bf-400c-b38d-8fa79f7ba4f9
streams:
- id: >-
kubernetes/metrics-kubernetes.container-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.container
metricsets:
- container
add_metadata: true
hosts:
- 'https://${env.NODE_NAME}:10250'
period: 10s
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.verification_mode: none
- id: >-
kubernetes/metrics-kubernetes.node-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.node
metricsets:
- node
add_metadata: true
hosts:
- 'https://${env.NODE_NAME}:10250'
period: 10s
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.verification_mode: none
- id: >-
kubernetes/metrics-kubernetes.pod-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.pod
metricsets:
- pod
add_metadata: true
hosts:
- 'https://${env.NODE_NAME}:10250'
period: 10s
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.verification_mode: none
- id: >-
kubernetes/metrics-kubernetes.system-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.system
metricsets:
- system
add_metadata: true
hosts:
- 'https://${env.NODE_NAME}:10250'
period: 10s
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.verification_mode: none
- id: >-
kubernetes/metrics-kubernetes.volume-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.volume
metricsets:
- volume
add_metadata: true
hosts:
- 'https://${env.NODE_NAME}:10250'
period: 10s
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.verification_mode: none
meta:
package:
name: kubernetes
version: 1.29.2
- id: >-
kubernetes/metrics-kube-state-metrics-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
revision: 1
name: kubernetes-1
type: kubernetes/metrics
data_stream:
namespace: cls-dev
use_output: cls-dev
package_policy_id: 8594aef4-88bf-400c-b38d-8fa79f7ba4f9
streams:
- id: >-
kubernetes/metrics-kubernetes.state_container-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_container
metricsets:
- state_container
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_cronjob-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_cronjob
metricsets:
- state_cronjob
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_daemonset-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_daemonset
metricsets:
- state_daemonset
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_deployment-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_deployment
metricsets:
- state_deployment
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_job-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_job
metricsets:
- state_job
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_node-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_node
metricsets:
- state_node
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_persistentvolume-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_persistentvolume
metricsets:
- state_persistentvolume
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_persistentvolumeclaim-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_persistentvolumeclaim
metricsets:
- state_persistentvolumeclaim
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_pod-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_pod
metricsets:
- state_pod
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_replicaset-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_replicaset
metricsets:
- state_replicaset
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_resourcequota-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_resourcequota
metricsets:
- state_resourcequota
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_service-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_service
metricsets:
- state_service
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_statefulset-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_statefulset
metricsets:
- state_statefulset
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- id: >-
kubernetes/metrics-kubernetes.state_storageclass-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.state_storageclass
metricsets:
- state_storageclass
add_metadata: true
hosts:
- 'kube-state-metrics:8080'
period: 10s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
meta:
package:
name: kubernetes
version: 1.29.2
- id: kubernetes/metrics-kube-apiserver-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
revision: 1
name: kubernetes-1
type: kubernetes/metrics
data_stream:
namespace: cls-dev
use_output: cls-dev
package_policy_id: 8594aef4-88bf-400c-b38d-8fa79f7ba4f9
streams:
- id: >-
kubernetes/metrics-kubernetes.apiserver-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.apiserver
metricsets:
- apiserver
hosts:
- >-
https://${env.KUBERNETES_SERVICE_HOST}:${env.KUBERNETES_SERVICE_PORT}
period: 30s
condition: '${kubernetes_leaderelection.leader} == true'
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.certificate_authorities:
- /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
meta:
package:
name: kubernetes
version: 1.29.2
- id: kubernetes/metrics-kube-proxy-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
revision: 1
name: kubernetes-1
type: kubernetes/metrics
data_stream:
namespace: cls-dev
use_output: cls-dev
package_policy_id: 8594aef4-88bf-400c-b38d-8fa79f7ba4f9
streams:
- id: >-
kubernetes/metrics-kubernetes.proxy-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.proxy
metricsets:
- proxy
hosts:
- 'localhost:10249'
period: 10s
meta:
package:
name: kubernetes
version: 1.29.2
- id: kubernetes/metrics-kube-scheduler-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
revision: 1
name: kubernetes-1
type: kubernetes/metrics
data_stream:
namespace: cls-dev
use_output: cls-dev
package_policy_id: 8594aef4-88bf-400c-b38d-8fa79f7ba4f9
streams:
- id: >-
kubernetes/metrics-kubernetes.scheduler-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.scheduler
metricsets:
- scheduler
hosts:
- 'https://0.0.0.0:10259'
period: 10s
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.verification_mode: none
condition: '${kubernetes.labels.component} == ''kube-scheduler'''
meta:
package:
name: kubernetes
version: 1.29.2
- id: >-
kubernetes/metrics-kube-controller-manager-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
revision: 1
name: kubernetes-1
type: kubernetes/metrics
data_stream:
namespace: cls-dev
use_output: cls-dev
package_policy_id: 8594aef4-88bf-400c-b38d-8fa79f7ba4f9
streams:
- id: >-
kubernetes/metrics-kubernetes.controllermanager-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.controllermanager
metricsets:
- controllermanager
hosts:
- 'https://0.0.0.0:10257'
period: 10s
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.verification_mode: none
condition: '${kubernetes.labels.component} == ''kube-controller-manager'''
meta:
package:
name: kubernetes
version: 1.29.2
- id: kubernetes/metrics-events-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
revision: 1
name: kubernetes-1
type: kubernetes/metrics
data_stream:
namespace: cls-dev
use_output: cls-dev
package_policy_id: 8594aef4-88bf-400c-b38d-8fa79f7ba4f9
streams:
- id: >-
kubernetes/metrics-kubernetes.event-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: metrics
dataset: kubernetes.event
metricsets:
- event
period: 10s
add_metadata: true
skip_older: true
condition: '${kubernetes_leaderelection.leader} == true'
meta:
package:
name: kubernetes
version: 1.29.2
- id: filestream-container-logs-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
revision: 1
name: kubernetes-1
type: filestream
data_stream:
namespace: cls-dev
use_output: cls-dev
package_policy_id: 8594aef4-88bf-400c-b38d-8fa79f7ba4f9
streams:
- id: >-
kubernetes-container-logs-${kubernetes.pod.name}-${kubernetes.container.id}
data_stream:
type: logs
dataset: kubernetes.container_logs
paths:
- '/var/log/containers/*${kubernetes.container.id}.log'
prospector.scanner.symlinks: true
parsers:
- container:
stream: all
format: auto
meta:
package:
name: kubernetes
version: 1.29.2
- id: filestream-audit-logs-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
revision: 1
name: kubernetes-1
type: filestream
data_stream:
namespace: cls-dev
use_output: cls-dev
package_policy_id: 8594aef4-88bf-400c-b38d-8fa79f7ba4f9
streams:
- id: >-
filestream-kubernetes.audit_logs-8594aef4-88bf-400c-b38d-8fa79f7ba4f9
data_stream:
type: logs
dataset: kubernetes.audit_logs
paths:
- /var/log/kubernetes/kube-apiserver-audit.log
exclude_files:
- .gz$
parsers:
- ndjson:
add_error_key: true
target: kubernetes_audit
processors:
- rename:
fields:
- from: kubernetes_audit
to: kubernetes.audit
- drop_fields:
when:
has_fields: kubernetes.audit.responseObject
fields:
- kubernetes.audit.responseObject.metadata
- drop_fields:
when:
has_fields: kubernetes.audit.requestObject
fields:
- kubernetes.audit.requestObject.metadata
- script:
lang: javascript
id: dedot_annotations
source: |
function process(event) {
var audit = event.Get("kubernetes.audit");
for (var annotation in audit["annotations"]) {
var annotation_dedoted = annotation.replace(/\./g,'_')
event.Rename("kubernetes.audit.annotations."+annotation, "kubernetes.audit.annotations."+annotation_dedoted)
}
return event;
} function test() {
var event = process(new Event({ "kubernetes": { "audit": { "annotations": { "authorization.k8s.io/decision": "allow", "authorization.k8s.io/reason": "RBAC: allowed by ClusterRoleBinding \"system:kube-scheduler\" of ClusterRole \"system:kube-scheduler\" to User \"system:kube-scheduler\"" } } } }));
if (event.Get("kubernetes.audit.annotations.authorization_k8s_io/decision") !== "allow") {
throw "expected kubernetes.audit.annotations.authorization_k8s_io/decision === allow";
}
}
meta:
package:
name: kubernetes
version: 1.29.2
revision: 2
agent:
download:
source_uri: 'https://artifacts.elastic.co/downloads/'
monitoring:
namespace: cls-dev
use_output: cls-dev
enabled: true
logs: true
metrics: true
output_permissions:
cls-dev:
_elastic_agent_monitoring:
indices:
- names:
- logs-elastic_agent.apm_server-cls-dev
privileges: &ref_0
- auto_configure
- create_doc
- names:
- metrics-elastic_agent.apm_server-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent.auditbeat-cls-dev
privileges: *ref_0
- names:
- metrics-elastic_agent.auditbeat-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent.cloud_defend-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent.cloudbeat-cls-dev
privileges: *ref_0
- names:
- metrics-elastic_agent.cloudbeat-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent-cls-dev
privileges: *ref_0
- names:
- metrics-elastic_agent.elastic_agent-cls-dev
privileges: *ref_0
- names:
- metrics-elastic_agent.endpoint_security-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent.endpoint_security-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent.filebeat_input-cls-dev
privileges: *ref_0
- names:
- metrics-elastic_agent.filebeat_input-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent.filebeat-cls-dev
privileges: *ref_0
- names:
- metrics-elastic_agent.filebeat-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent.fleet_server-cls-dev
privileges: *ref_0
- names:
- metrics-elastic_agent.fleet_server-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent.heartbeat-cls-dev
privileges: *ref_0
- names:
- metrics-elastic_agent.heartbeat-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent.metricbeat-cls-dev
privileges: *ref_0
- names:
- metrics-elastic_agent.metricbeat-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent.osquerybeat-cls-dev
privileges: *ref_0
- names:
- metrics-elastic_agent.osquerybeat-cls-dev
privileges: *ref_0
- names:
- logs-elastic_agent.packetbeat-cls-dev
privileges: *ref_0
- names:
- metrics-elastic_agent.packetbeat-cls-dev
privileges: *ref_0
_elastic_agent_checks:
cluster:
- monitor
8594aef4-88bf-400c-b38d-8fa79f7ba4f9:
indices:
- names:
- metrics-kubernetes.container-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.node-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.pod-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.system-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.volume-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_container-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_cronjob-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_daemonset-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_deployment-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_job-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_node-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_persistentvolume-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_persistentvolumeclaim-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_pod-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_replicaset-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_resourcequota-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_service-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_statefulset-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.state_storageclass-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.apiserver-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.proxy-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.scheduler-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.controllermanager-cls-dev
privileges: *ref_0
- names:
- metrics-kubernetes.event-cls-dev
privileges: *ref_0
- names:
- logs-kubernetes.container_logs-cls-dev
privileges: *ref_0
- names:
- logs-kubernetes.audit_logs-cls-dev
privileges: *ref_0
For more information, refer to Run Elastic Agent Standalone on Kubernetes | Fleet and Elastic Agent Guide [8.8] | Elastic.
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: elastic-agent
namespace: cls-dev
labels:
app: elastic-agent
spec:
selector:
matchLabels:
app: elastic-agent
template:
metadata:
labels:
app: elastic-agent
spec:
# Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
# Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: elastic-agent
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: elastic-agent
image: docker.elastic.co/beats/elastic-agent:8.5.0
args: [
"-c", "/etc/agent.yml",
"-e",
]
env:
# The basic authentication username used to connect to Elasticsearch
# This user needs the privileges required to publish events to Elasticsearch.
- name: ES_USERNAME
value: "elastic"
# The basic authentication password used to connect to Elasticsearch
- name: ES_PASSWORD
value: "**********"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
securityContext:
runAsUser: 0
resources:
limits:
memory: 700Mi
requests:
cpu: 100m
memory: 400Mi
volumeMounts:
- name: datastreams
mountPath: /etc/agent.yml
readOnly: true
subPath: agent.yml
- name: proc
mountPath: /hostfs/proc
readOnly: true
- name: cgroup
mountPath: /hostfs/sys/fs/cgroup
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
- name: etc-kubernetes
mountPath: /hostfs/etc/kubernetes
readOnly: true
- name: var-lib
mountPath: /hostfs/var/lib
readOnly: true
- name: passwd
mountPath: /hostfs/etc/passwd
readOnly: true
- name: group
mountPath: /hostfs/etc/group
readOnly: true
- name: etcsysmd
mountPath: /hostfs/etc/systemd
readOnly: true
volumes:
- name: datastreams
configMap:
defaultMode: 0640
name: agent-node-datastreams
- name: proc
hostPath:
path: /proc
- name: cgroup
hostPath:
path: /sys/fs/cgroup
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
# Needed for cloudbeat
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes
# Needed for cloudbeat
- name: var-lib
hostPath:
path: /var/lib
# Needed for cloudbeat
- name: passwd
hostPath:
path: /etc/passwd
# Needed for cloudbeat
- name: group
hostPath:
path: /etc/group
# Needed for cloudbeat
- name: etcsysmd
hostPath:
path: /etc/systemd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: elastic-agent
subjects:
- kind: ServiceAccount
name: elastic-agent
namespace: cls-dev
roleRef:
kind: ClusterRole
name: elastic-agent
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: cls-dev
name: elastic-agent
subjects:
- kind: ServiceAccount
name: elastic-agent
namespace: cls-dev
roleRef:
kind: Role
name: elastic-agent
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: elastic-agent-kubeadm-config
namespace: cls-dev
subjects:
- kind: ServiceAccount
name: elastic-agent
namespace: cls-dev
roleRef:
kind: Role
name: elastic-agent-kubeadm-config
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
  labels:
    k8s-app: elastic-agent
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - namespaces
      - events
      - pods
      - services
      - configmaps
      # Needed for cloudbeat
      - serviceaccounts
      - persistentvolumes
      - persistentvolumeclaims
    verbs: ["get", "list", "watch"]
  # Enable this rule only if planning to use the kubernetes_secrets provider
  #- apiGroups: [""]
  #  resources:
  #    - secrets
  #  verbs: ["get"]
  - apiGroups: ["extensions"]
    resources:
      - replicasets
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - statefulsets
      - deployments
      - replicasets
      - daemonsets
    verbs: ["get", "list", "watch"]
  - apiGroups: ["batch"]
    resources:
      - jobs
      - cronjobs
    verbs: ["get", "list", "watch"]
  - apiGroups:
      - ""
    resources:
      - nodes/stats
    verbs:
      - get
  # Needed for apiserver
  - nonResourceURLs:
      - "/metrics"
    verbs:
      - get
  # Needed for cloudbeat
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources:
      - clusterrolebindings
      - clusterroles
      - rolebindings
      - roles
    verbs: ["get", "list", "watch"]
  # Needed for cloudbeat
  - apiGroups: ["policy"]
    resources:
      - podsecuritypolicies
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: elastic-agent
  # Should be the namespace where elastic-agent is running
  namespace: cls-dev
  labels:
    k8s-app: elastic-agent
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: elastic-agent-kubeadm-config
  namespace: cls-dev
  labels:
    k8s-app: elastic-agent
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: cls-dev
  labels:
    k8s-app: elastic-agent
I have also tried using Filebeat to fetch each microservice's logs, but from the logs alone I cannot retrieve a status or condition telling me whether a microservice is dead or alive.
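To make it clearer what I mean by a "dead or alive" status: ideally something like the pod-level health below, where each FastAPI pod exposes a health endpoint that a probe checks, and I assume the kubernetes.pod / kubernetes.state_pod metrics in the policy above would then reflect it. This is only a sketch; the /health path and port 8000 are my assumptions for illustration, not something the services expose today:

# Sketch only: a liveness/readiness probe on one FastAPI pod (path and port assumed).
apiVersion: v1
kind: Pod
metadata:
  name: example-fastapi-pod          # placeholder name
  namespace: cls-dev
spec:
  containers:
    - name: app
      image: registry.example.com/example-fastapi-service:latest  # placeholder image
      ports:
        - containerPort: 8000
      livenessProbe:
        httpGet:
          path: /health              # assumed endpoint
          port: 8000
        initialDelaySeconds: 10
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /health              # assumed endpoint
          port: 8000
        periodSeconds: 10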
Thank you very much; I hope to get a good answer, suggestion, or solution to my problem.