Kubernetes integration does not work on Elasticsearch/Kibana ver. 8.6.2 (SSL issue)

Hello team,

I have a test environment set up that hosts a 3-node k8s cluster plus Elasticsearch and Kibana. The ELK stack is installed on a separate server and works without problems. The integration with Kubernetes was done according to the instructions: the manifest file was created, along with the agents, policies, etc.

The manifest was applied and is active on the K8s cluster. Elastic Agents have been created on the k8s cluster and are active on all nodes.

  • The k8s cluster and Elastic are in the same VLAN and can reach each other over the network
  • Elasticsearch and Kibana have SSL enabled (xpack.security.enabled: true)
  • all traffic goes over HTTPS
  • kube-state-metrics is deployed and active on the k8s cluster (everything is OK)
  • Elastic Agents are deployed and active on the k8s cluster (running on each node)

In Kibana I can see all the Kubernetes dashboards, but no data from the k8s cluster (metrics or logs) is displayed. The indices are not shown in Index Management either.

Below is the log from Elasticsearch and the manifest that was applied to the k8s cluster. The DaemonSet is active on the k8s cluster.

I would like to ask for help in solving this problem.

Thank you very much!!!

********** Elasticsearch log_begin************

[2023-02-23T11:06:00,384][WARN ][o.e.h.AbstractHttpServerTransport] [node-1] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/10.0.87.200:9200, remoteAddress=/10.0.87.100:33982}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[?:?]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[?:?]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
        at java.lang.Thread.run(Thread.java:1589) ~[?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at sun.security.ssl.Alert.createSSLException(Alert.java:130) ~[?:?]
        at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:358) ~[?:?]
        at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:286) ~[?:?]
        at sun.security.ssl.TransportContext.dispatch(TransportContext.java:204) ~[?:?]
        at sun.security.ssl.SSLTransport.decode(SSLTransport.java:172) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
        at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
        at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296) ~[?:?]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1343) ~[?:?]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1236) ~[?:?]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:519) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:458) ~[?:?]
        ... 16 more
[2023-02-23T11:06:00,629][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node-1] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.87.200:9200, remoteAddress=/10.0.87.200:53866}
[2023-02-23T11:06:02,047][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node-1] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.87.200:9200, remoteAddress=/10.0.87.102:51844}

********** Elasticsearch log_end************
********** DaemonSet_begin************

apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-node-datastreams
  namespace: kube-system
  labels:
    k8s-app: elastic-agent
data:
  agent.yml: |-
    id: 057711e0-b2a9-11ed-8ff5-89f256e4633f
    outputs:
      default:
        type: elasticsearch
        hosts:
          - 'https://10.0.87.200:9200'
        username: '${ES_USERNAME}'
        password: '${ES_PASSWORD}'
    inputs:
      - id: logfile-system-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
        revision: 1
        name: system-1
        type: logfile
        data_stream:
          namespace: default
        use_output: default
        package_policy_id: 595b62d6-eda2-4bf6-bc55-4f2cc05160ea
        streams:
          - id: logfile-system.auth-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: logs
              dataset: system.auth
            ignore_older: 72h
            paths:
              - /var/log/auth.log*
              - /var/log/secure*
            exclude_files:
              - .gz$
            multiline:
              pattern: ^\s
              match: after
            tags:
              - system-auth
            processors:
              - add_locale: null
          - id: logfile-system.syslog-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: logs
              dataset: system.syslog
            paths:
              - /var/log/messages*
              - /var/log/syslog*
            exclude_files:
              - .gz$
            multiline:
              pattern: ^\s
              match: after
            processors:
              - add_locale: null
            ignore_older: 72h
        meta:
          package:
            name: system
            version: 1.24.2
      - id: winlog-system-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
        revision: 1
        name: system-1
        type: winlog
        data_stream:
          namespace: default
        use_output: default
        package_policy_id: 595b62d6-eda2-4bf6-bc55-4f2cc05160ea
        streams:
          - id: winlog-system.application-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: logs
              dataset: system.application
            name: Application
            condition: '${host.platform} == ''windows'''
            ignore_older: 72h
          - id: winlog-system.security-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: logs
              dataset: system.security
            name: Security
            condition: '${host.platform} == ''windows'''
            ignore_older: 72h
          - id: winlog-system.system-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: logs
              dataset: system.system
            name: System
            condition: '${host.platform} == ''windows'''
            ignore_older: 72h
        meta:
          package:
            name: system
            version: 1.24.2
      - id: system/metrics-system-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
        revision: 1
        name: system-1
        type: system/metrics
        data_stream:
          namespace: default
        use_output: default
        package_policy_id: 595b62d6-eda2-4bf6-bc55-4f2cc05160ea
        streams:
          - id: system/metrics-system.fsstat-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: metrics
              dataset: system.fsstat
            metricsets:
              - fsstat
            period: 1m
            processors:
              - drop_event.when.regexp:
                  system.fsstat.mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)
          - id: system/metrics-system.diskio-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: metrics
              dataset: system.diskio
            metricsets:
              - diskio
            diskio.include_devices: null
            period: 10s
          - id: system/metrics-system.process-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: metrics
              dataset: system.process
            metricsets:
              - process
            period: 10s
            process.include_top_n.by_cpu: 5
            process.include_top_n.by_memory: 5
            process.cmdline.cache.enabled: true
            process.cgroups.enabled: false
            process.include_cpu_ticks: false
            processes:
              - .*
          - id: system/metrics-system.load-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: metrics
              dataset: system.load
            metricsets:
              - load
            condition: '${host.platform} != ''windows'''
            period: 10s
          - id: >-
              system/metrics-system.process.summary-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: metrics
              dataset: system.process.summary
            metricsets:
              - process_summary
            period: 10s
          - id: system/metrics-system.memory-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: metrics
              dataset: system.memory
            metricsets:
              - memory
            period: 10s
          - id: system/metrics-system.network-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: metrics
              dataset: system.network
            metricsets:
              - network
            period: 10s
            network.interfaces: null
          - id: >-
              system/metrics-system.filesystem-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: metrics
              dataset: system.filesystem
            metricsets:
              - filesystem
            period: 1m
            processors:
              - drop_event.when.regexp:
                  system.filesystem.mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)
          - id: system/metrics-system.cpu-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: metrics
              dataset: system.cpu
            metricsets:
              - cpu
            cpu.metrics:
              - percentages
              - normalized_percentages
            period: 10s
          - id: >-
              system/metrics-system.socket_summary-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: metrics
              dataset: system.socket_summary
            metricsets:
              - socket_summary
            period: 10s
          - id: system/metrics-system.uptime-595b62d6-eda2-4bf6-bc55-4f2cc05160ea
            data_stream:
              type: metrics
              dataset: system.uptime
            metricsets:
              - uptime
            period: 10s
        meta:
          package:
            name: system
            version: 1.24.2
      - id: kubernetes/metrics-kubelet-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        revision: 1
        name: k8s_local
        type: kubernetes/metrics
        data_stream:
          namespace: default
        use_output: default
        package_policy_id: 2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        streams:
          - id: >-
              kubernetes/metrics-kubernetes.container-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.container
            metricsets:
              - container
            add_metadata: true
            hosts:
              - 'https://${env.NODE_NAME}:10250'
            period: 10s
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            ssl.verification_mode: none
          - id: >-
              kubernetes/metrics-kubernetes.node-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.node
            metricsets:
              - node
            add_metadata: true
            hosts:
              - 'https://${env.NODE_NAME}:10250'
            period: 10s
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            ssl.verification_mode: none
          - id: >-
              kubernetes/metrics-kubernetes.pod-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.pod
            metricsets:
              - pod
            add_metadata: true
            hosts:
              - 'https://${env.NODE_NAME}:10250'
            period: 10s
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            ssl.verification_mode: none
          - id: >-
              kubernetes/metrics-kubernetes.system-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.system
            metricsets:
              - system
            add_metadata: true
            hosts:
              - 'https://${env.NODE_NAME}:10250'
            period: 10s
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            ssl.verification_mode: none
          - id: >-
              kubernetes/metrics-kubernetes.volume-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.volume
            metricsets:
              - volume
            add_metadata: true
            hosts:
              - 'https://${env.NODE_NAME}:10250'
            period: 10s
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            ssl.verification_mode: none
        meta:
          package:
            name: kubernetes
            version: 1.31.2
      - id: >-
          kubernetes/metrics-kube-state-metrics-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        revision: 1
        name: k8s_local
        type: kubernetes/metrics
        data_stream:
          namespace: default
        use_output: default
        package_policy_id: 2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        streams:
          - id: >-
              kubernetes/metrics-kubernetes.state_container-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_container
            metricsets:
              - state_container
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_cronjob-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_cronjob
            metricsets:
              - state_cronjob
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_daemonset-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_daemonset
            metricsets:
              - state_daemonset
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_deployment-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_deployment
            metricsets:
              - state_deployment
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_job-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_job
            metricsets:
              - state_job
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_node-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_node
            metricsets:
              - state_node
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_persistentvolume-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_persistentvolume
            metricsets:
              - state_persistentvolume
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_persistentvolumeclaim-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_persistentvolumeclaim
            metricsets:
              - state_persistentvolumeclaim
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_pod-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_pod
            metricsets:
              - state_pod
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_replicaset-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_replicaset
            metricsets:
              - state_replicaset
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_resourcequota-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_resourcequota
            metricsets:
              - state_resourcequota
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_service-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_service
            metricsets:
              - state_service
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_statefulset-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_statefulset
            metricsets:
              - state_statefulset
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          - id: >-
              kubernetes/metrics-kubernetes.state_storageclass-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.state_storageclass
            metricsets:
              - state_storageclass
            add_metadata: true
            hosts:
              - 'kube-state-metrics:8080'
            period: 10s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        meta:
          package:
            name: kubernetes
            version: 1.31.2
      - id: kubernetes/metrics-kube-apiserver-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        revision: 1
        name: k8s_local
        type: kubernetes/metrics
        data_stream:
          namespace: default
        use_output: default
        package_policy_id: 2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        streams:
          - id: >-
              kubernetes/metrics-kubernetes.apiserver-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.apiserver
            metricsets:
              - apiserver
            hosts:
              - >-
                https://${env.KUBERNETES_SERVICE_HOST}:${env.KUBERNETES_SERVICE_PORT}
            period: 30s
            condition: '${kubernetes_leaderelection.leader} == true'
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            ssl.certificate_authorities:
              - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        meta:
          package:
            name: kubernetes
            version: 1.31.2
      - id: kubernetes/metrics-kube-proxy-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        revision: 1
        name: k8s_local
        type: kubernetes/metrics
        data_stream:
          namespace: default
        use_output: default
        package_policy_id: 2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        streams:
          - id: >-
              kubernetes/metrics-kubernetes.proxy-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.proxy
            metricsets:
              - proxy
            hosts:
              - 'localhost:10249'
            period: 10s
        meta:
          package:
            name: kubernetes
            version: 1.31.2
      - id: kubernetes/metrics-events-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        revision: 1
        name: k8s_local
        type: kubernetes/metrics
        data_stream:
          namespace: default
        use_output: default
        package_policy_id: 2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        streams:
          - id: >-
              kubernetes/metrics-kubernetes.event-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
            data_stream:
              type: metrics
              dataset: kubernetes.event
            metricsets:
              - event
            period: 10s
            add_metadata: true
            skip_older: true
            condition: '${kubernetes_leaderelection.leader} == true'
        meta:
          package:
            name: kubernetes
            version: 1.31.2
      - id: filestream-container-logs-2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        revision: 1
        name: k8s_local
        type: filestream
        data_stream:
          namespace: default
        use_output: default
        package_policy_id: 2fdcdf27-e16d-4fc5-b0c1-26f23638fe99
        streams:
          - id: >-
              kubernetes-container-logs-${kubernetes.pod.name}-${kubernetes.container.id}
            data_stream:
              type: logs
              dataset: kubernetes.container_logs
            paths:
              - '/var/log/containers/*${kubernetes.container.id}.log'
            prospector.scanner.symlinks: true
            parsers:
              - container:
                  stream: all
                  format: auto
        meta:
          package:
            name: kubernetes
            version: 1.31.2
    revision: 3
    agent:
      download:
        sourceURI: 'https://artifacts.elastic.co/downloads/'
      monitoring:
        namespace: default
        use_output: default
        enabled: true
        logs: true
        metrics: true
    output_permissions:
      default:
        _elastic_agent_monitoring:
          indices:
            - names:
                - metrics-elastic_agent.auditbeat-default
              privileges: &ref_0
                - auto_configure
                - create_doc
            - names:
                - logs-elastic_agent.apm_server-default
              privileges: *ref_0
            - names:
                - metrics-elastic_agent.filebeat_input-default
              privileges: *ref_0
            - names:
                - logs-elastic_agent.filebeat_input-default
              privileges: *ref_0
            - names:
                - metrics-elastic_agent.endpoint_security-default
              privileges: *ref_0
            - names:
                - metrics-elastic_agent.filebeat-default
              privileges: *ref_0
            - names:
                - logs-elastic_agent.filebeat-default
              privileges: *ref_0
            - names:
                - logs-elastic_agent.heartbeat-default
              privileges: *ref_0
            - names:
                - metrics-elastic_agent.fleet_server-default
              privileges: *ref_0
            - names:
                - logs-elastic_agent.fleet_server-default
              privileges: *ref_0
            - names:
                - logs-elastic_agent.endpoint_security-default
              privileges: *ref_0
            - names:
                - metrics-elastic_agent.elastic_agent-default
              privileges: *ref_0
            - names:
                - logs-elastic_agent-default
              privileges: *ref_0
            - names:
                - metrics-elastic_agent.cloudbeat-default
              privileges: *ref_0
            - names:
                - logs-elastic_agent.cloudbeat-default
              privileges: *ref_0
            - names:
                - metrics-elastic_agent.apm_server-default
              privileges: *ref_0
            - names:
                - logs-elastic_agent.auditbeat-default
              privileges: *ref_0
            - names:
                - metrics-elastic_agent.heartbeat-default
              privileges: *ref_0
            - names:
                - metrics-elastic_agent.metricbeat-default
              privileges: *ref_0
            - names:
                - metrics-elastic_agent.osquerybeat-default
              privileges: *ref_0
            - names:
                - metrics-elastic_agent.packetbeat-default
              privileges: *ref_0
            - names:
                - logs-elastic_agent.metricbeat-default
              privileges: *ref_0
            - names:
                - logs-elastic_agent.packetbeat-default
              privileges: *ref_0
            - names:
                - logs-elastic_agent.osquerybeat-default
              privileges: *ref_0
        _elastic_agent_checks:
          cluster:
            - monitor
        595b62d6-eda2-4bf6-bc55-4f2cc05160ea:
          indices:
            - names:
                - logs-system.auth-default
              privileges: *ref_0
            - names:
                - logs-system.syslog-default
              privileges: *ref_0
            - names:
                - logs-system.application-default
              privileges: *ref_0
            - names:
                - logs-system.security-default
              privileges: *ref_0
            - names:
                - logs-system.system-default
              privileges: *ref_0
            - names:
                - metrics-system.fsstat-default
              privileges: *ref_0
            - names:
                - metrics-system.diskio-default
              privileges: *ref_0
            - names:
                - metrics-system.process-default
              privileges: *ref_0
            - names:
                - metrics-system.load-default
              privileges: *ref_0
            - names:
                - metrics-system.process.summary-default
              privileges: *ref_0
            - names:
                - metrics-system.memory-default
              privileges: *ref_0
            - names:
                - metrics-system.network-default
              privileges: *ref_0
            - names:
                - metrics-system.filesystem-default
              privileges: *ref_0
            - names:
                - metrics-system.cpu-default
              privileges: *ref_0
            - names:
                - metrics-system.socket_summary-default
              privileges: *ref_0
            - names:
                - metrics-system.uptime-default
              privileges: *ref_0
        2fdcdf27-e16d-4fc5-b0c1-26f23638fe99:
          indices:
            - names:
                - metrics-kubernetes.container-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.node-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.pod-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.system-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.volume-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_container-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_cronjob-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_daemonset-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_deployment-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_job-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_node-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_persistentvolume-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_persistentvolumeclaim-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_pod-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_replicaset-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_resourcequota-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_service-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_statefulset-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.state_storageclass-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.apiserver-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.proxy-default
              privileges: *ref_0
            - names:
                - metrics-kubernetes.event-default
              privileges: *ref_0
            - names:
                - logs-kubernetes.container_logs-default
              privileges: *ref_0

---
# For more information refer https://www.elastic.co/guide/en/fleet/current/running-on-kubernetes-standalone.html
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elastic-agent
  namespace: kube-system
  labels:
    app: elastic-agent
spec:
  selector:
    matchLabels:
      app: elastic-agent
  template:
    metadata:
      labels:
        app: elastic-agent
    spec:
      # Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
      # Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: elastic-agent
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: elastic-agent
          image: docker.elastic.co/beats/elastic-agent:8.6.2
          args: [
            "-c", "/etc/agent.yml",
            "-e",
          ]
          env:
            # The basic authentication username used to connect to Elasticsearch
            # This user needs the privileges required to publish events to Elasticsearch.
            - name: ES_USERNAME
              value: "elastic"
            # The basic authentication password used to connect to Elasticsearch
            - name: ES_PASSWORD
              value: "password"
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 700Mi
            requests:
              cpu: 100m
              memory: 400Mi
          volumeMounts:
            - name: datastreams
              mountPath: /etc/agent.yml
              readOnly: true
              subPath: agent.yml
            - name: proc
              mountPath: /hostfs/proc
              readOnly: true
            - name: cgroup
              mountPath: /hostfs/sys/fs/cgroup
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: etc-full
              mountPath: /hostfs/etc
              readOnly: true
            - name: var-lib
              mountPath: /hostfs/var/lib
              readOnly: true
      volumes:
        - name: datastreams
          configMap:
            defaultMode: 0640
            name: agent-node-datastreams
        - name: proc
          hostPath:
            path: /proc
        - name: cgroup
          hostPath:
            path: /sys/fs/cgroup
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # The following volumes are needed for Cloud Security Posture integration (cloudbeat)
        # If you are not using this integration, then these volumes and the corresponding
        # mounts can be removed.
        - name: etc-full
          hostPath:
            path: /etc
        - name: var-lib
          hostPath:
            path: /var/lib
---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
  - kind: ServiceAccount
    name: elastic-agent
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: kube-system
  name: elastic-agent
subjects:
  - kind: ServiceAccount
    name: elastic-agent
    namespace: kube-system
roleRef:
  kind: Role
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: elastic-agent-kubeadm-config
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: elastic-agent
    namespace: kube-system
roleRef:
  kind: Role
  name: elastic-agent-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
  labels:
    k8s-app: elastic-agent
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - namespaces
      - events
      - pods
      - services
      - configmaps
      # Needed for cloudbeat
      - serviceaccounts
      - persistentvolumes
      - persistentvolumeclaims
    verbs: ["get", "list", "watch"]
  # Enable this rule only if planning to use kubernetes_secrets provider
  #- apiGroups: [""]
  #  resources:
  #  - secrets
  #  verbs: ["get"]
  - apiGroups: ["extensions"]
    resources:
      - replicasets
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - statefulsets
      - deployments
      - replicasets
      - daemonsets
    verbs: ["get", "list", "watch"]
  - apiGroups: ["batch"]
    resources:
      - jobs
      - cronjobs
    verbs: ["get", "list", "watch"]
  - apiGroups:
      - ""
    resources:
      - nodes/stats
    verbs:
      - get
  # Needed for apiserver
  - nonResourceURLs:
      - "/metrics"
    verbs:
      - get
  # Needed for cloudbeat
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources:
      - clusterrolebindings
      - clusterroles
      - rolebindings
      - roles
    verbs: ["get", "list", "watch"]
  # Needed for cloudbeat
  - apiGroups: ["policy"]
    resources:
      - podsecuritypolicies
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: elastic-agent
  # Should be the namespace where elastic-agent is running
  namespace: kube-system
  labels:
    k8s-app: elastic-agent
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: elastic-agent-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: elastic-agent
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: kube-system
  labels:
    k8s-app: elastic-agent
---

********** DaemonSet_end************

In the Elasticsearch logs there are TLS warnings; has TLS been set up correctly?

Are there any errors in the agent logs that can help?

TLS for Elasticsearch is set up via the YAML config file below; HTTPS on port 9200 is working correctly.

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["elastic-local"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

And this is the latest agent log from the k8s master node.

2023-03-01T23:03:38.412190516+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-01T22:03:38.410Z","message":"kubernetes: Node kube-master discovered by NODE_NAME environment variable","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"ecs.version":"1.6.0","log.logger":"kubernetes","log.origin":{"file.line":146,"file.name":"kubernetes/util.go"},"service.name":"filebeat","libbeat.processor":"add_kubernetes_metadata","ecs.version":"1.6.0"}
2023-03-01T23:03:38.683541151+01:00 stderr F {"log.level":"error","@timestamp":"2023-03-01T22:03:38.683Z","message":"Failed to connect to backoff(elasticsearch(https://10.0.87.200:9200)): Get \"https://10.0.87.200:9200\": x509: certificate signed by unknown authority","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"beat/metrics-monitoring","type":"beat/metrics"},"log":{"source":"beat/metrics-monitoring"},"log.logger":"publisher_pipeline_output","log.origin":{"file.line":150,"file.name":"pipeline/client_worker.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
2023-03-01T23:03:38.68356257+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-01T22:03:38.683Z","message":"Attempting to reconnect to backoff(elasticsearch(https://10.0.87.200:9200)) with 53 reconnect attempt(s)","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"beat/metrics-monitoring","type":"beat/metrics"},"log":{"source":"beat/metrics-monitoring"},"ecs.version":"1.6.0","log.logger":"publisher_pipeline_output","log.origin":{"file.line":141,"file.name":"pipeline/client_worker.go"},"service.name":"metricbeat","ecs.version":"1.6.0"}
2023-03-01T23:03:38.695963981+01:00 stderr F {"log.level":"error","@timestamp":"2023-03-01T22:03:38.695Z","message":"Error dialing x509: certificate signed by unknown authority","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"beat/metrics-monitoring","type":"beat/metrics"},"log":{"source":"beat/metrics-monitoring"},"service.name":"metricbeat","network":"tcp","ecs.version":"1.6.0","log.logger":"esclientleg","log.origin":{"file.line":38,"file.name":"transport/logging.go"},"address":"10.0.87.200:9200","ecs.version":"1.6.0"}
2023-03-01T23:03:39.064503276+01:00 stderr F {"log.level":"error","@timestamp":"2023-03-01T22:03:39.059Z","message":"Failed to connect to backoff(elasticsearch(https://10.0.87.200:9200)): Get \"https://10.0.87.200:9200\": x509: certificate signed by unknown authority","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"publisher_pipeline_output","log.origin":{"file.line":150,"file.name":"pipeline/client_worker.go"},"service.name":"filebeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
2023-03-01T23:03:39.064517973+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-01T22:03:39.059Z","message":"Attempting to reconnect to backoff(elasticsearch(https://10.0.87.200:9200)) with 4 reconnect attempt(s)","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"publisher_pipeline_output","log.origin":{"file.line":141,"file.name":"pipeline/client_worker.go"},"service.name":"filebeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
2023-03-01T23:03:39.071606265+01:00 stderr F {"log.level":"error","@timestamp":"2023-03-01T22:03:39.069Z","message":"Error dialing x509: certificate signed by unknown authority","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.origin":{"file.line":38,"file.name":"transport/logging.go"},"service.name":"filebeat","network":"tcp","log.logger":"esclientleg","address":"10.0.87.200:9200","ecs.version":"1.6.0","ecs.version":"1.6.0"}

You need to specify the Elasticsearch CA in the agent output.

OK, thank you, but can you tell me how (and where) I do that?

The certs can either be specified as paths, or you can inline the cert contents.

Optionally, you could also use the ssl.ca_trusted_fingerprint option instead, giving the SHA-256 fingerprint of the CA certificate that Elasticsearch uses.
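
For example, in the standalone agent.yml inside your ConfigMap, the default output would then look roughly like this. This is only a sketch, not tested against your setup; the path /etc/elastic-agent/certs/ca.crt is an example and the file would have to be mounted into the agent pod (e.g. from a Secret), or you can paste the PEM contents inline instead of a path:

outputs:
  default:
    type: elasticsearch
    hosts:
      - 'https://10.0.87.200:9200'
    username: '${ES_USERNAME}'
    password: '${ES_PASSWORD}'
    # Option 1: trust the CA that signed the Elasticsearch HTTP certificate
    # (example path; mount the CA file into the pod, or inline the PEM contents here)
    ssl.certificate_authorities:
      - /etc/elastic-agent/certs/ca.crt
    # Option 2, instead of option 1: trust the CA by its SHA-256 fingerprint
    # ssl.ca_trusted_fingerprint: "<sha256 fingerprint of the CA certificate>"

On a default 8.x install, the CA that signed the HTTP certificate is usually available as http_ca.crt next to the http.p12 keystore in the Elasticsearch config directory, but check your own setup.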

OK, thank you. So that's actually part of the Elastic Agent's DaemonSet YAML on the k8s master node. I'll try it first thing tomorrow morning and will definitely get back to you.

Thank you again!

I did everything as you said, but I still don't see all the indices in Elasticsearch. Some metrics have appeared and are now available in Kibana, but the logs from the k8s cluster are not. Below I am sending you the logs from the agent and from Elasticsearch.

If you have time, I would like to ask you to take a look. Thank you very much!

2023-03-02T15:31:55.130057464+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-02T14:31:55.129Z","message":"Non-zero metrics in the last 30s","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"service.name":"filebeat","monitoring":{"ecs.version":"1.6.0","metrics":{"beat":{"cpu":{"system":{"ticks":240,"time":{"ms":20}},"total":{"ticks":1050,"time":{"ms":60},"value":1050},"user":{"ticks":810,"time":{"ms":40}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":24},"info":{"ephemeral_id":"3779510e-bff1-4b6f-ac50-988650eb6fa4","uptime":{"ms":90059},"version":"8.6.2"},"memstats":{"gc_next":115307208,"memory_alloc":78528160,"memory_sys":4194304,"memory_total":232196104,"rss":207409152},"runtime":{"goroutines":546}},"filebeat":{"events":{"active":4104},"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":14}},"output":{"events":{"active":0}},"pipeline":{"clients":10,"events":{"active":4104}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":1.33,"15":1.43,"5":1.34,"norm":{"1":0.665,"15":0.715,"5":0.67}}}}},"log.logger":"monitoring","log.origin":{"file.line":187,"file.name":"log/log.go"},"ecs.version":"1.6.0"}
2023-03-02T15:31:57.56932724+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-02T14:31:57.568Z","message":"add_cloud_metadata: hosting provider type not detected.","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.logger":"add_cloud_metadata","log.origin":{"file.line":102,"file.name":"add_cloud_metadata/add_cloud_metadata.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
2023-03-02T15:31:57.664360193+01:00 stderr F {"log.level":"warn","@timestamp":"2023-03-02T14:31:57.663Z","message":"read token request for getting IMDSv2 token returns empty: Put \"http://169.254.169.254/latest/api/token\": context deadline exceeded (Client.Timeout exceeded while awaiting headers). No token in the metadata request will be used.","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.logger":"add_cloud_metadata","log.origin":{"file.line":81,"file.name":"add_cloud_metadata/provider_aws_ec2.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
2023-03-02T15:31:57.667750671+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-02T14:31:57.665Z","message":"add_kubernetes_metadata: kubernetes env detected, with version: v1.26.0","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.origin":{"file.line":73,"file.name":"add_kubernetes_metadata/kubernetes.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
2023-03-02T15:31:57.669393079+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-02T14:31:57.668Z","message":"kubernetes: Node kube-master discovered by NODE_NAME environment variable","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.logger":"kubernetes","log.origin":{"file.line":146,"file.name":"kubernetes/util.go"},"service.name":"metricbeat","libbeat.processor":"add_kubernetes_metadata","ecs.version":"1.6.0","ecs.version":"1.6.0"}
2023-03-02T15:31:58.347517355+01:00 stderr F {"log.level":"error","@timestamp":"2023-03-02T14:31:58.347Z","message":"W0302 14:31:58.347329  122200 reflector.go:324] k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:serviceaccount:kube-system:elastic-agent\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"ecs.version":"1.6.0"}
2023-03-02T15:31:58.347539176+01:00 stderr F {"log.level":"error","@timestamp":"2023-03-02T14:31:58.347Z","message":"E0302 14:31:58.347349  122200 reflector.go:138] k8s.io/client-go@v0.23.4/tools/cache/reflector.go:167: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:serviceaccount:kube-system:elastic-agent\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"ecs.version":"1.6.0"}
2023-03-02T15:31:59.758448282+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-02T14:31:59.758Z","message":"Non-zero metrics in the last 30s","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"service.name":"metricbeat","monitoring":{"ecs.version":"1.6.0","metrics":{"beat":{"cpu":{"system":{"ticks":210,"time":{"ms":90}},"total":{"ticks":1070,"time":{"ms":520},"value":1070},"user":{"ticks":860,"time":{"ms":430}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":26},"info":{"ephemeral_id":"39c29aea-6ee8-4016-9d00-c57f842fe3f7","uptime":{"ms":60078},"version":"8.6.2"},"memstats":{"gc_next":72003128,"memory_alloc":35258432,"memory_sys":21561344,"memory_total":163202752,"rss":182067200},"runtime":{"goroutines":1092}},"libbeat":{"config":{"module":{"running":0,"starts":9}},"output":{"events":{"active":0}},"pipeline":{"clients":18,"events":{"active":415,"published":373,"retry":10,"total":374}}},"metricbeat":{"kubernetes":{"node":{"events":1,"success":1},"pod":{"events":2,"success":3},"proxy":{"events":18,"success":18},"state_daemonset":{"events":30,"success":30},"state_deployment":{"events":57,"success":57},"state_job":{"events":3,"success":3},"state_node":{"events":9,"success":9},"state_persistentvolume":{"events":6,"success":6},"state_persistentvolumeclaim":{"events":6,"success":6},"state_pod":{"events":148,"success":148},"state_service":{"events":54,"success":54},"state_statefulset":{"events":2,"success":2},"system":{"events":9,"success":9},"volume":{"events":30,"success":30}}},"system":{"load":{"1":1.3,"15":1.43,"5":1.33,"norm":{"1":0.65,"15":0.715,"5":0.665}}}}},"log.logger":"monitoring","log.origin":{"file.line":187,"file.name":"log/log.go"},"ecs.version":"1.6.0"}
2023-03-02T15:31:59.898367271+01:00 stderr F {"log.level":"error","@timestamp":"2023-03-02T14:31:59.897Z","message":"Failed to connect to backoff(elasticsearch(https://10.0.87.200:9200)): Get \"https://10.0.87.200:9200\": x509: certificate signed by unknown authority","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.logger":"publisher_pipeline_output","log.origin":{"file.line":150,"file.name":"pipeline/client_worker.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
2023-03-02T15:31:59.898390704+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-02T14:31:59.897Z","message":"Attempting to reconnect to backoff(elasticsearch(https://10.0.87.200:9200)) with 5 reconnect attempt(s)","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.origin":{"file.line":141,"file.name":"pipeline/client_worker.go"},"service.name":"metricbeat","ecs.version":"1.6.0","log.logger":"publisher_pipeline_output","ecs.version":"1.6.0"}
2023-03-02T15:31:59.908486581+01:00 stderr F {"log.level":"error","@timestamp":"2023-03-02T14:31:59.908Z","message":"Error dialing x509: certificate signed by unknown authority","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"address":"10.0.87.200:9200","log.logger":"esclientleg","service.name":"metricbeat","network":"tcp","ecs.version":"1.6.0","log.origin":{"file.line":38,"file.name":"transport/logging.go"},"ecs.version":"1.6.0"}
2023-03-02T15:32:00.666139816+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-02T14:32:00.665Z","message":"add_cloud_metadata: hosting provider type not detected.","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.logger":"add_cloud_metadata","log.origin":{"file.line":102,"file.name":"add_cloud_metadata/add_cloud_metadata.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
2023-03-02T15:32:00.674621176+01:00 stderr F {"log.level":"warn","@timestamp":"2023-03-02T14:32:00.672Z","message":"read token request for getting IMDSv2 token returns empty: Put \"http://169.254.169.254/latest/api/token\": context deadline exceeded (Client.Timeout exceeded while awaiting headers). No token in the metadata request will be used.","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.logger":"add_cloud_metadata","log.origin":{"file.line":81,"file.name":"add_cloud_metadata/provider_aws_ec2.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
2023-03-02T15:32:00.678627063+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-02T14:32:00.676Z","message":"add_kubernetes_metadata: kubernetes env detected, with version: v1.26.0","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.origin":{"file.line":73,"file.name":"add_kubernetes_metadata/kubernetes.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
2023-03-02T15:32:00.679242874+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-02T14:32:00.679Z","message":"kubernetes: Node kube-master discovered by NODE_NAME environment variable","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"log.origin":{"file.line":146,"file.name":"kubernetes/util.go"},"service.name":"metricbeat","libbeat.processor":"add_kubernetes_metadata","ecs.version":"1.6.0","log.logger":"kubernetes","ecs.version":"1.6.0"}
2023-03-02T15:32:01.073736918+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-02T14:32:01.073Z","message":"Non-zero metrics in the last 30s","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-monitoring","type":"filestream"},"log":{"source":"filestream-monitoring"},"log.logger":"monitoring","log.origin":{"file.line":187,"file.name":"log/log.go"},"service.name":"filebeat","monitoring":{"ecs.version":"1.6.0","metrics":{"beat":{"cpu":{"system":{"ticks":730},"total":{"ticks":3440,"value":3440},"user":{"ticks":2710}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":19},"info":{"ephemeral_id":"3caa2974-0891-48da-8750-de28a8b74d8e","uptime":{"ms":5550056},"version":"8.6.2"},"memstats":{"gc_next":78652616,"memory_alloc":39544696,"memory_total":284397232,"rss":172777472},"runtime":{"goroutines":77}},"filebeat":{"events":{"active":4105},"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":1}},"output":{"events":{"active":0}},"pipeline":{"clients":8,"events":{"active":4105}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":1.28,"15":1.43,"5":1.32,"norm":{"1":0.64,"15":0.715,"5":0.66}}}}},"ecs.version":"1.6.0"}
2023-03-02T15:32:01.164064321+01:00 stderr F {"log.level":"error","@timestamp":"2023-03-02T14:32:01.163Z","message":"Failed to connect to backoff(elasticsearch(https://10.0.87.200:9200)): Get \"https://10.0.87.200:9200\": x509: certificate signed by unknown authority","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"service.name":"filebeat","ecs.version":"1.6.0","log.logger":"publisher_pipeline_output","log.origin":{"file.line":150,"file.name":"pipeline/client_worker.go"},"ecs.version":"1.6.0"}
2023-03-02T15:32:01.164103696+01:00 stderr F {"log.level":"info","@timestamp":"2023-03-02T14:32:01.163Z","message":"Attempting to reconnect to backoff(elasticsearch(https://10.0.87.200:9200)) with 6 reconnect attempt(s)","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"service.name":"filebeat","ecs.version":"1.6.0","log.logger":"publisher_pipeline_output","log.origin":{"file.line":141,"file.name":"pipeline/client_worker.go"},"ecs.version":"1.6.0"}
2023-03-02T15:32:01.174090748+01:00 stderr F {"log.level":"error","@timestamp":"2023-03-02T14:32:01.173Z","message":"Error dialing x509: certificate signed by unknown authority","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"filestream-default","type":"filestream"},"log":{"source":"filestream-default"},"log.logger":"esclientleg","log.origin":{"file.line":38,"file.name":"transport/logging.go"},"service.name":"filebeat","network":"tcp","address":"10.0.87.200:9200","ecs.version":"1.6.0","ecs.version":"1.6.0"}

[2023-03-02T15:48:12,496][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node-1] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.87.200:9200, remoteAddress=/10.0.87.101:49090}
[2023-03-02T15:48:13,419][WARN ][o.e.h.AbstractHttpServerTransport] [node-1] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/10.0.87.200:9200, remoteAddress=/10.0.87.100:41088}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[?:?]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[?:?]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
        at java.lang.Thread.run(Thread.java:1589) ~[?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at sun.security.ssl.Alert.createSSLException(Alert.java:130) ~[?:?]
        at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:358) ~[?:?]
        at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:286) ~[?:?]
        at sun.security.ssl.TransportContext.dispatch(TransportContext.java:204) ~[?:?]
        at sun.security.ssl.SSLTransport.decode(SSLTransport.java:172) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
        at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
        at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296) ~[?:?]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1343) ~[?:?]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1236) ~[?:?]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:519) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:458) ~[?:?]
        ... 16 more
[2023-03-02T15:48:14,863][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node-1] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.87.200:9200, remoteAddress=/10.0.87.200:42996}
[2023-03-02T15:48:15,051][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node-1] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.87.200:9200, remoteAddress=/10.0.87.102:60580}
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-node-datastreams
  namespace: kube-system
  labels:
    k8s-app: elastic-agent
data:
  agent.yml: |-
    id: 057711e0-b2a9-11ed-8ff5-89f256e4633f
    outputs:
      default:
        type: elasticsearch
        hosts:
          - 'https://10.0.87.200:9200'
        username: '${ES_USERNAME}'
        password: '${ES_PASSWORD}'
        ssl.certificate_authorities: [/home/tgolubic/elastic/http_ca.pem]
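
A likely cause of the "x509: certificate signed by unknown authority" errors above is that /home/tgolubic/elastic/http_ca.pem is a path on the Elasticsearch host; the agent pods running on the Kubernetes nodes cannot read that file, so from their point of view the CA is missing. Below is a minimal sketch of one way to make the CA available inside the pods, assuming the file is shipped to the cluster as a Secret (the Secret name elastic-http-ca and the mount path /etc/agent-certs are illustrative, not part of the original deployment):

# Hypothetical setup: the Secret could be created from the existing CA file, e.g.
#   kubectl -n kube-system create secret generic elastic-http-ca --from-file=http_ca.pem
# Relevant fragment of the elastic-agent DaemonSet pod spec:
spec:
  template:
    spec:
      containers:
        - name: elastic-agent
          volumeMounts:
            - name: elastic-http-ca
              mountPath: /etc/agent-certs
              readOnly: true
      volumes:
        - name: elastic-http-ca
          secret:
            secretName: elastic-http-ca

# agent.yml would then reference the in-pod path instead of the host path:
outputs:
  default:
    type: elasticsearch
    hosts:
      - 'https://10.0.87.200:9200'
    username: '${ES_USERNAME}'
    password: '${ES_PASSWORD}'
    ssl.certificate_authorities: ['/etc/agent-certs/http_ca.pem']

Whichever approach is used, the path given in ssl.certificate_authorities has to resolve inside the agent container, not on the Elasticsearch server.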

There are still CA validation errors in the agent log. Are you sure the agent actually has access to that cert file from inside the container? It may be easier to inline the CA certificate directly in the output configuration.
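
For reference, the Beats-style SSL settings used by the agent output also accept the certificate content embedded directly in the configuration, so inlining would look roughly like this in agent.yml (the PEM body below is a placeholder; paste the actual contents of http_ca.pem):

outputs:
  default:
    type: elasticsearch
    hosts:
      - 'https://10.0.87.200:9200'
    username: '${ES_USERNAME}'
    password: '${ES_PASSWORD}'
    # Placeholder PEM block, not the real certificate:
    ssl.certificate_authorities:
      - |
        -----BEGIN CERTIFICATE-----
        ...certificate body omitted...
        -----END CERTIFICATE-----

Inlining the CA this way removes the dependency on any file path being present inside the container.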

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.