Hi all.
Thank you all for taking the time to read this.
Although I haven't seen any answer yet, I did get a notification from Warcolm, and I appreciate the attempt to help.
Anyway, since I couldn't make this work, I tried a different approach based on the sample files. It worked for me, and I want to share it with you.
I took the configuration sample called 'Metricbeat for Kubernetes Monitoring' that you can find in the YAML samples to deploy ECK.
I simply changed it a little bit to have a total of four nodes: two master nodes and two multipurpose nodes with the data, ingest, and ml roles.
I also added the part related to Filebeat with autodiscover, creating an all-in-one file, which I named eck.yaml, with the following content:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat
spec:
  type: metricbeat
  version: 8.5.2
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    metricbeat:
      autodiscover:
        providers:
          - hints:
              default_config: {}
              enabled: "true"
            node: ${NODE_NAME}
            type: kubernetes
      modules:
        - module: system
          period: 10s
          metricsets:
            - cpu
            - load
            - memory
            - network
            - process
            - process_summary
          process:
            include_top_n:
              by_cpu: 5
              by_memory: 5
          processes:
            - .*
        - module: system
          period: 1m
          metricsets:
            - filesystem
            - fsstat
          processors:
            - drop_event:
                when:
                  regexp:
                    system:
                      filesystem:
                        mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib)($|/)
        - module: kubernetes
          period: 10s
          node: ${NODE_NAME}
          hosts:
            - https://${NODE_NAME}:10250
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          ssl:
            verification_mode: none
          metricsets:
            - node
            - system
            - pod
            - container
            - volume
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: metricbeat
        automountServiceAccountToken: true # some older Beat versions are depending on this settings presence in k8s context
        containers:
          - args:
              - -e
              - -c
              - /etc/beat.yml
              - -system.hostfs=/hostfs
            name: metricbeat
            volumeMounts:
              - mountPath: /hostfs/sys/fs/cgroup
                name: cgroup
              - mountPath: /var/run/docker.sock
                name: dockersock
              - mountPath: /hostfs/proc
                name: proc
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        securityContext:
          runAsUser: 0
        terminationGracePeriodSeconds: 30
        volumes:
          - hostPath:
              path: /sys/fs/cgroup
            name: cgroup
          - hostPath:
              path: /var/run/docker.sock
            name: dockersock
          - hostPath:
              path: /proc
            name: proc
---
# permissions needed for metricbeat
# source: https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-kubernetes.html
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - namespaces
      - events
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - replicasets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - statefulsets
      - deployments
      - replicasets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/stats
    verbs:
      - get
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: eck-home
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
  - kind: ServiceAccount
    name: metricbeat
    namespace: eck-home
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
spec:
  type: filebeat
  version: 8.5.2
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    filebeat:
      autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints:
              enabled: true
              default_config:
                type: container
                paths:
                  - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        containers:
          - name: filebeat
            securityContext:
              runAsUser: 0
              # If using Red Hat OpenShift uncomment this:
              #privileged: true
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: varlogpods
                mountPath: /var/log/pods
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: varlogpods
            hostPath:
              path: /var/log/pods
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
      - nodes
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: eck-home
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.5.2
  nodeSets:
    - name: master
      count: 2
      config:
        node.roles: master
        node.store.allow_mmap: false
        xpack.monitoring.collection.enabled: true
        xpack.monitoring.elasticsearch.collection.enabled: false
      podTemplate:
        metadata:
          annotations:
            # To prevent the logs from monitoring themselves and creating new logs out of what the logs already contain. Infinite loop.
            co.elastic.logs/enabled: "false"
    - name: data
      count: 2
      config:
        node.roles: ["data", "ingest", "ml"]
        node.store.allow_mmap: false
        xpack.monitoring.collection.enabled: true
        xpack.monitoring.elasticsearch.collection.enabled: false
      podTemplate:
        metadata:
          annotations:
            # To prevent the logs from monitoring themselves and creating new logs out of what the logs already contain. Infinite loop.
            co.elastic.logs/enabled: "false"
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 8.5.2
  count: 1
  elasticsearchRef:
    name: elasticsearch
...
I also downloaded the operator:
wget https://download.elastic.co/downloads/eck/2.5.0/operator.yaml
Then I replaced every 'default' namespace with 'eck-home' in the operator.yaml file.
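In case someone wants to script that step, something like this should do the replacement (just a sketch; it assumes the namespace only appears as 'namespace: default' in the file, so review the result before applying):
sed -i 's/namespace: default/namespace: eck-home/g' operator.yaml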
I created the namespace called 'eck-home'. I did this because I wanted to deploy ECK in a namespace other than 'default' and check whether things would still work well.
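Creating the namespace is a single command:
kubectl create namespace eck-home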
After that, the steps I ran were:
kubectl create -f https://download.elastic.co/downloads/eck/2.5.0/crds.yaml -n eck-home
kubectl apply -f operator.yaml -n eck-home
And
kubectl apply -f eck.yaml -n eck-home
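To check that everything comes up, you can list the ECK custom resources and the pods with standard kubectl, using the resource names defined in eck.yaml above:
kubectl get elasticsearch,kibana,beat -n eck-home
kubectl get pods -n eck-home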
Everything seemed to work well. I've got Metricbeat data about the pods, the containers, and the K8s nodes. Not all dashboards show data, but I don't see errors in the documents from Kibana -> Discover.
I can also find documents in the filebeat-* data view that match the kubectl logs of my pods. I also deployed version 8.4.2 first and changed the version to 8.5.2 afterwards, and the operator performed the upgrade correctly.
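In case it helps anyone reproducing the check, this is how I'd expect to reach Kibana with the resource names above, assuming the usual <name>-es-elastic-user secret and <name>-kb-http service names that ECK generates:
# Password of the built-in elastic user
kubectl get secret elasticsearch-es-elastic-user -n eck-home -o go-template='{{.data.elastic | base64decode}}'
# Forward Kibana to https://localhost:5601
kubectl port-forward service/kibana-kb-http 5601 -n eck-home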
So far so good. Maybe I'll run into strange behaviors in the future, and I'll let you know if I do, but for now everything is working.
Hope it can help someone. Thank you again.
Regards.
Carlos T.