Problems deploying Metricbeat in an ECK environment running on Minikube

Hi all.

After installing Minikube on an Ubuntu VM and creating a single-node K8s cluster, I've been able to deploy Elasticsearch and Kibana using the quickstart examples from the official Elastic documentation.
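For context, the ECK operator install itself was just the two standard quickstart commands (version 2.5.0 in my case; they appear again further down when I redeploy into a different namespace):

kubectl create -f https://download.elastic.co/downloads/eck/2.5.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.5.0/operator.yaml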

I've created a cluster with 2 master nodes + 2 data nodes using the following .yaml:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.4.2
  nodeSets:
  - name: master
    count: 2
    config:
      node.roles: master
      node.store.allow_mmap: false
      xpack.monitoring.collection.enabled: true
      xpack.monitoring.elasticsearch.collection.enabled: false
    podTemplate:
      metadata:
        annotations:
          # To prevent the logs from monitoring themselves and generating logs out of what their own logs already contain (infinite loop)
          co.elastic.logs/enabled: "false"
  - name: data
    count: 2
    config:
      node.roles: ["data", "ingest", "ml"]
      node.store.allow_mmap: false
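
Once this is applied, the cluster status can be checked with kubectl (in the default namespace here); when everything is ready the HEALTH column turns green and NODES shows 4:

kubectl get elasticsearch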

And a Kibana instance with this .yaml:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 8.4.2
  count: 1
  elasticsearchRef:
    name: elasticsearch
  http:
    service:
      spec:
        type: LoadBalancer
  config:
    monitoring.kibana.collection.enabled: false
    # Recommended when Elasticsearch runs in containers
    monitoring.ui.container.elasticsearch.enabled: true
  podTemplate:
    metadata:
      annotations:
        co.elastic.logs/fileset: "log" # audit fileset will duplicate all logs
    spec:
      # tolerations (irrelevant if taints are not configured on the k8s nodes)
      tolerations:
        - key: "area"
          operator: "Equal"
          value: "monitoring"
          effect: "NoSchedule"

After that, I tried to deploy Metricbeat so I could get metrics about the Pods, volumes, etc. contained in my single-node Minikube cluster.
The first thing I did was install kube-state-metrics. Once that was done, I forwarded its service port to the Ubuntu VM:

kubectl port-forward svc/kube-state-metrics 30135:8080 -n kube-system &

With that in place I was able to see metrics from a browser on the Ubuntu VM where the Kubernetes containers run: http://localhost:30135/metrics

And I get data like the following:

kube_persistentvolumeclaim_labels{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-0"} 1
kube_persistentvolumeclaim_labels{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-1"} 1
kube_persistentvolumeclaim_labels{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-0"} 1
kube_persistentvolumeclaim_labels{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-1"} 1
# HELP kube_persistentvolumeclaim_annotations Kubernetes annotations converted to Prometheus labels.
# TYPE kube_persistentvolumeclaim_annotations gauge
kube_persistentvolumeclaim_annotations{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-0"} 1
kube_persistentvolumeclaim_annotations{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-1"} 1
kube_persistentvolumeclaim_annotations{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-0"} 1
kube_persistentvolumeclaim_annotations{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-1"} 1
# HELP kube_persistentvolumeclaim_info [STABLE] Information about persistent volume claim.
# TYPE kube_persistentvolumeclaim_info gauge
kube_persistentvolumeclaim_info{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-0",storageclass="standard",volumename="pvc-73d68560-097a-4e82-8154-f05023c795eb"} 1
kube_persistentvolumeclaim_info{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-1",storageclass="standard",volumename="pvc-b531f768-2e86-4268-abb3-9f4c7820e5cf"} 1
kube_persistentvolumeclaim_info{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-0",storageclass="standard",volumename="pvc-344c7f37-b041-4e27-9217-339fe3a437f1"} 1
kube_persistentvolumeclaim_info{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-1",storageclass="standard",volumename="pvc-634af398-0ea3-433a-87d5-f571c69d8623"} 1
# HELP kube_persistentvolumeclaim_status_phase [STABLE] The phase the persistent volume claim is currently in.
# TYPE kube_persistentvolumeclaim_status_phase gauge
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-0",phase="Lost"} 0
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-0",phase="Bound"} 1
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-0",phase="Pending"} 0
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-1",phase="Lost"} 0
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-1",phase="Bound"} 1
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-1",phase="Pending"} 0
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-0",phase="Lost"} 0
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-0",phase="Bound"} 1
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-0",phase="Pending"} 0
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-1",phase="Lost"} 0
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-1",phase="Bound"} 1
kube_persistentvolumeclaim_status_phase{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-master-1",phase="Pending"} 0
# HELP kube_persistentvolumeclaim_resource_requests_storage_bytes [STABLE] The capacity of storage requested by the persistent volume claim.
# TYPE kube_persistentvolumeclaim_resource_requests_storage_bytes gauge
kube_persistentvolumeclaim_resource_requests_storage_bytes{namespace="default",persistentvolumeclaim="elasticsearch-data-elasticsearch-es-data-1
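
The same output can also be checked from the shell instead of the browser, for example:

curl -s http://localhost:30135/metrics | grep kube_persistentvolumeclaim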

With kube-state-metrics running and apparently working well, I tried to find a 'quickstart' example for Metricbeat, but despite looking at
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-quickstart.html
I only found examples for Filebeat and Heartbeat, not for Metricbeat.

What I found instead, here:
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-configuration-examples.html

is the possibility of creating a whole Elastic Stack environment using the following manifest:

kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.5/config/recipes/beats/metricbeat_hosts.yaml

What I did was download that .yaml and extract the Metricbeat part from it. Afterwards I applied my resulting metric.yaml file, which looked like this:

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat
spec:
  type: metricbeat
  version: 8.4.2
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    metricbeat:
      autodiscover:
        providers:
        - hints:
            default_config: {}
            enabled: "true"
          node: ${NODE_NAME}
          type: kubernetes
      modules:
      - module: system
        period: 10s
        metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        process:
          include_top_n:
            by_cpu: 5
            by_memory: 5
        processes:
        - .*
      - module: system
        period: 1m
        metricsets:
        - filesystem
        - fsstat
        processors:
        - drop_event:
            when:
              regexp:
                system:
                  filesystem:
                    mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib)($|/)
      - module: kubernetes
        period: 10s
        node: ${NODE_NAME}
        hosts:
        - https://${NODE_NAME}:10250
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        ssl:
          verification_mode: none
        metricsets:
        - node
        - system
        - pod
        - container
        - volume
    processors:
    - add_cloud_metadata: {}
    - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: metricbeat
        automountServiceAccountToken: true # some older Beat versions depend on this setting being present in a k8s context
        containers:
        - args:
          - -e
          - -c
          - /etc/beat.yml
          - -system.hostfs=/hostfs
          name: metricbeat
          volumeMounts:
          - mountPath: /hostfs/sys/fs/cgroup
            name: cgroup
          - mountPath: /var/run/docker.sock
            name: dockersock
          - mountPath: /hostfs/proc
            name: proc
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        securityContext:
          runAsUser: 0
        terminationGracePeriodSeconds: 30
        volumes:
        - hostPath:
            path: /sys/fs/cgroup
          name: cgroup
        - hostPath:
            path: /var/run/docker.sock
          name: dockersock
        - hostPath:
            path: /proc
          name: proc
---
# permissions needed for metricbeat
# source: https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-kubernetes.html
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  - events
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  - deployments
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: default
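
I applied the manifest and then checked the Beat resource (kubectl get beat reports its health and the expected/available DaemonSet pods):

kubectl apply -f metric.yaml
kubectl get beat metricbeat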

Once it was applied, a new pod ('metricbeat-beat-metricbeat-j68zf') and a new index were created, and documents were indexed. But after taking a look in Kibana -> Discover, I can see that I have only received documents from (I guess) the system module, with fields like system.cpu.cores or similar, but nothing like kubernetes.*. As you can guess, my Metricbeat system dashboards show graphs, but the ones related to Kubernetes remain empty.
I tried to get inside the Metricbeat pod:
kubectl exec --stdin --tty metricbeat-beat-metricbeat-j68zf -- /bin/bash
It dropped me into /usr/share/metricbeat of my Minikube container, and the directory /usr/share/metricbeat/logs is empty (I guess this was a naive check, as I'm new to K8s).

I also ran kubectl logs metricbeat-beat-metricbeat-j68zf and got:

W1208 17:28:31.714431 7 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
E1208 17:28:31.714468 7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
W1208 17:28:36.683537 7 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: failed to list *v1.Node: nodes "minikube" is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
E1208 17:28:36.683570 7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: nodes "minikube" is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
{"log.level":"error","@timestamp":"2022-12-08T17:28:36.911Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-12-08T17:28:40.056Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-12-08T17:28:46.909Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-12-08T17:28:50.060Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-12-08T17:28:53.714Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":185},"message":"Non-zero metrics in the last 30s","service.name":"metricbeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":-9756672}}}},"cpu":{"system":{"ticks":15180,"time":{"ms":160}},"total":{"ticks":38110,"time":{"ms":270},"value":38110},"user":{"ticks":22930,"time":{"ms":110}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":10},"info":{"ephemeral_id":"9b122d95-00b3-4474-bb31-905ef3a85de3","uptime":{"ms":3846001},"version":"8.4.2"},"memstats":{"gc_next":24140080,"memory_alloc":22178544,"memory_total":2153241688,"rss":195506176},"runtime":{"goroutines":119}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":66,"active":0,"batches":18,"total":66},"read":{"bytes":16929},"write":{"bytes":87130}},"pipeline":{"clients":13,"events":{"active":0,"published":66,"total":66},"queue":{"acked":66}}},"metricbeat":{"kubernetes":{"system":{"events":3,"failures":3},"volume":{"events":3,"failures":3}},"system":{"cpu":{"events":3,"success":3},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":42,"success":42},"process":{"events":6,"success":6},"process_summary":{"events":3,"success":3}}},"system":{"load":{"1":0.61,"15":0.91,"5":0.97,"norm":{"1":0.0381,"15":0.0569,"5":0.0606}}}},"ecs.version":"1.6.0"}}
{"log.level":"error","@timestamp":"2022-12-08T17:28:56.909Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}
W1208 17:28:59.359600 7 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: failed to list *v1.Node: nodes "minikube" is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
E1208 17:28:59.359643 7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: nodes "minikube" is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
{"log.level":"error","@timestamp":"2022-12-08T17:29:00.059Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}
W1208 17:29:05.603426 7 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: failed to list *v1.Node: nodes "minikube" is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
E1208 17:29:05.603460 7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: nodes "minikube" is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
{"log.level":"error","@timestamp":"2022-12-08T17:29:06.909Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}
W1208 17:29:07.696492 7 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: failed to list *v1.Node: nodes "minikube" is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
E1208 17:29:07.696530 7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: nodes "minikube" is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
{"log.level":"error","@timestamp":"2022-12-08T17:29:10.059Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-12-08T17:29:16.909Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-12-08T17:29:20.056Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}
W1208 17:29:22.609017 7 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
E1208 17:29:22.609059 7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:default:metricbeat" cannot list resource "nodes" in API group "" at the cluster scope
{"log.level":"info","@timestamp":"2022-12-08T17:29:23.713Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":185},"message":"Non-zero metrics in the last 30s","service.name":"metricbeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":-16142336}}}},"cpu":{"system":{"ticks":15330,"time":{"ms":150}},"total":{"ticks":38400,"time":{"ms":290},"value":38400},"user":{"ticks":23070,"time":{"ms":140}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":10},"info":{"ephemeral_id":"9b122d95-00b3-4474-bb31-905ef3a85de3","uptime":{"ms":3875999},"version":"8.4.2"},"memstats":{"gc_next":25431712,"memory_alloc":15002576,"memory_total":2169047624,"rss":196845568},"runtime":{"goroutines":119}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":67,"active":0,"batches":18,"total":67},"read":{"bytes":17147},"write":{"bytes":88222}},"pipeline":{"clients":13,"events":{"active":0,"filtered":1,"published":67,"total":68},"queue":{"acked":67}}},"metricbeat":{"kubernetes":{"system":{"events":3,"failures":3},"volume":{"events":3,"failures":3}},"system":{"cpu":{"events":3,"success":3},"filesystem":{"events":1,"success":1},"fsstat":{"events":1,"success":1},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":42,"success":42},"process":{"events":6,"success":6},"process_summary":{"events":3,"success":3}}},"system":{"load":{"1":0.91,"15":0.92,"5":1.01,"norm":{"1":0.0569,"15":0.0575,"5":0.0631}}}},"ecs.version":"1.6.0"}}
{"log.level":"error","@timestamp":"2022-12-08T17:29:26.912Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-12-08T17:29:30.056Z","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: HTTP error 403 in : 403 Forbidden","service.name":"metricbeat","ecs.version":"1.6.0"}

And this is where I'm stuck. Can someone please advise on what to do?
Thank you for having the patience to read it all and for your help.

Best regards.

Carlos T.

Hi all.

Thank you all for taking the time to read this.
Although I haven't seen any answer, I got a notification from Warcolm, whose attempt to help I appreciate.

Anyway, as I couldn't make this work, I tried a different approach based on the sample files, which worked for me and which I want to share with you.

I took the 'Metricbeat for Kubernetes Monitoring' configuration sample that you can find in the ECK deployment YAML samples.

I simply changed it a little to have a total of four Elasticsearch nodes: two master nodes and two multipurpose data/ingest/ml nodes.

I also added the Filebeat-with-autodiscover part, creating an all-in-one file which I named eck.yaml, with the following content:

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: metricbeat
spec:
  type: metricbeat
  version: 8.5.2
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    metricbeat:
      autodiscover:
        providers:
        - hints:
            default_config: {}
            enabled: "true"
          node: ${NODE_NAME}
          type: kubernetes
      modules:
      - module: system
        period: 10s
        metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        process:
          include_top_n:
            by_cpu: 5
            by_memory: 5
        processes:
        - .*
      - module: system
        period: 1m
        metricsets:
        - filesystem
        - fsstat
        processors:
        - drop_event:
            when:
              regexp:
                system:
                  filesystem:
                    mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib)($|/)
      - module: kubernetes
        period: 10s
        node: ${NODE_NAME}
        hosts:
        - https://${NODE_NAME}:10250
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        ssl:
          verification_mode: none
        metricsets:
        - node
        - system
        - pod
        - container
        - volume
    processors:
    - add_cloud_metadata: {}
    - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: metricbeat
        automountServiceAccountToken: true # some older Beat versions depend on this setting being present in a k8s context
        containers:
        - args:
          - -e
          - -c
          - /etc/beat.yml
          - -system.hostfs=/hostfs
          name: metricbeat
          volumeMounts:
          - mountPath: /hostfs/sys/fs/cgroup
            name: cgroup
          - mountPath: /var/run/docker.sock
            name: dockersock
          - mountPath: /hostfs/proc
            name: proc
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        securityContext:
          runAsUser: 0
        terminationGracePeriodSeconds: 30
        volumes:
        - hostPath:
            path: /sys/fs/cgroup
          name: cgroup
        - hostPath:
            path: /var/run/docker.sock
          name: dockersock
        - hostPath:
            path: /proc
          name: proc
---
# permissions needed for metricbeat
# source: https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-kubernetes.html
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  - events
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  - deployments
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: eck-home
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: eck-home
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
spec:
  type: filebeat
  version: 8.5.2
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    filebeat:
      autodiscover:
        providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints:
            enabled: true
            default_config:
              type: container
              paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
    - add_cloud_metadata: {}
    - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        containers:
        - name: filebeat
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: eck-home
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.5.2
  nodeSets:
  - name: master
    count: 2
    config:
      node.roles: master
      node.store.allow_mmap: false
      xpack.monitoring.collection.enabled: true
      xpack.monitoring.elasticsearch.collection.enabled: false
    podTemplate:
      metadata:
        annotations:
          # To prevent the logs from monitoring themselves and generating logs out of what their own logs already contain (infinite loop)
          co.elastic.logs/enabled: "false"
  - name: data
    count: 2
    config:
      node.roles: ["data", "ingest", "ml"]
      node.store.allow_mmap: false
      xpack.monitoring.collection.enabled: true
      xpack.monitoring.elasticsearch.collection.enabled: false
    podTemplate:
      metadata:
        annotations:
          # To prevent the logs from monitoring themselves and generating logs out of what their own logs already contain (infinite loop)
          co.elastic.logs/enabled: "false"

---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 8.5.2
  count: 1
  elasticsearchRef:
    name: elasticsearch
...

I also downloaded the operator manifest:
wget https://download.elastic.co/downloads/eck/2.5.0/operator.yaml

Then I replaced all the 'default' namespaces with 'eck-home' in the operator.yaml file, and I created the namespace called 'eck-home'.

I did this because I wanted to deploy ECK in a namespace other than 'default' and check whether things would still work.
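
In practice that came down to something like this (the exact sed depends on how the namespace strings appear in the file, so take it as a sketch):

kubectl create namespace eck-home
sed -i 's/namespace: default/namespace: eck-home/g' operator.yaml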

After that, the steps I ran were:
kubectl create -f https://download.elastic.co/downloads/eck/2.5.0/crds.yaml -n eck-home

kubectl apply -f operator.yaml -n eck-home

And

kubectl apply -f eck.yaml -n eck-home
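
Once applied, the ECK resources and pods can be checked with, for example:

kubectl get elasticsearch,kibana,beat -n eck-home
kubectl get pods -n eck-home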

Everything seems to work well. I get Metricbeat data about the pods, the containers and the K8s nodes; not all dashboards show data, but I don't see errors in the documents in Kibana -> Discover.
I can also find documents in the filebeat-* data view that match the kubectl logs of my pods. I originally deployed version 8.4.2, later changed the version to 8.5.2, and the operator handled the upgrade correctly.
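
The version change itself was only a matter of editing spec.version in eck.yaml and re-applying it; something like the following patch (noted here only as an illustration, I just edited the file) would achieve the same for the Elasticsearch resource:

kubectl -n eck-home patch elasticsearch elasticsearch --type merge -p '{"spec":{"version":"8.5.2"}}'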
So far so good. Maybe in the future I'll run into strange behaviour and I'll let you know about it, but for now everything looks right.

Hope it can help someone. Thank you again.

Regards.

Carlos T.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.