Kubernetes integration not shipping container logs

Hi,

I have a K3S cluster running and installed the Elastic Agent on a node to test shipping logs and metrics from the cluster. After some fiddling I got most of the metrics working (with kube-state-metrics). My problem is that the Elastic Agent won't ship the container logs, no matter what I try. The Elastic Agent logs don't even contain an entry showing that it tries to open the container log files, or that it fails to do so.
I would be happy about any pointers as to what might be causing this!

Here is the corresponding config from the Elastic Agent policy; it is pretty much the default:

- data_stream:
    namespace: default
  id: filestream-container-logs-7c7678d0-f10a-11ee-9a2e-d107417df48e
  meta:
    package:
      name: kubernetes
      version: 1.58.0
  name: kubernetes-1
  package_policy_id: 7c7678d0-f10a-11ee-9a2e-d107417df48e
  revision: 32
  streams:
  - data_stream:
      dataset: kubernetes.container_logs
    id: kubernetes-container-logs-${kubernetes.pod.name}-${kubernetes.container.id}
    parsers:
    - container:
        format: auto
        stream: all
    paths:
    - /var/log/containers/*${kubernetes.container.id}.log
    processors:
    - add_fields:
        fields:
          annotations:
            elastic_co/dataset: ${kubernetes.annotations.elastic.co/dataset|""}
            elastic_co/namespace: ${kubernetes.annotations.elastic.co/namespace|""}
            elastic_co/preserve_original_event: ${kubernetes.annotations.elastic.co/preserve_original_event|""}
        target: kubernetes
    - drop_fields:
        fields:
        - kubernetes.annotations.elastic_co/dataset
        ignore_missing: true
        when:
          equals:
            kubernetes:
              annotations:
                elastic_co/dataset: ""
    - drop_fields:
        fields:
        - kubernetes.annotations.elastic_co/namespace
        ignore_missing: true
        when:
          equals:
            kubernetes:
              annotations:
                elastic_co/namespace: ""
    - drop_fields:
        fields:
        - kubernetes.annotations.elastic_co/preserve_original_event
        ignore_missing: true
        when:
          equals:
            kubernetes:
              annotations:
                elastic_co/preserve_original_event: ""
    - add_tags:
        tags:
        - preserve_original_event
        when:
          and:
          - has_fields:
            - kubernetes.annotations.elastic_co/preserve_original_event
          - regexp:
              kubernetes:
                annotations:
                  elastic_co/preserve_original_event: ^(?i)true$
    prospector:
      scanner:
        symlinks: true
  type: filestream
  use_output: 24bae780-b9d7-11ee-a766-9bf6e308e496
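
For reference, this is roughly how I looked for those entries (just a sketch, assuming the standard systemd service install; exact commands and paths may differ on your setup):

# Dump the policy as the agent actually rendered it and check whether
# the filestream input for container logs shows up
sudo elastic-agent inspect

# Overall agent and input health
sudo elastic-agent status

# Follow the service logs
sudo journalctl -u elastic-agent -f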

I found the problem. The Kubernetes provider couldn't find my kubeconfig file, so it couldn't get the necessary information about the cluster. Sadly, the integration's documentation doesn't mention that you need to configure the provider, or even that it exists.

Hey, could you provide any further detail on the solution? I'm facing the same problem. How did you manage to make the Kubernetes provider find the kubeconfig file?

Hey,

if I recall correctly, my problem was that the Elastic Agent couldn't connect to the Kubernetes cluster API because it didn't find any credentials. As I understand it, if you run the Elastic Agent as a pod inside the cluster, it usually finds the service account token and certificate mounted at these locations:
/var/run/secrets/kubernetes.io/serviceaccount/token
/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
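
A quick way to confirm those mounts when the agent runs in-cluster (a sketch; the namespace and pod name are placeholders for wherever your agent DaemonSet lives):

# should list ca.crt, namespace and token
kubectl -n kube-system exec <elastic-agent-pod> -- ls /var/run/secrets/kubernetes.io/serviceaccount/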

Since I ran the Elastic Agent outside the cluster, on the hosts themselves, it needed a kubeconfig file to know how to connect. By default, the KUBECONFIG environment variable is checked, but the Elastic Agent service doesn't set it. So I added it to /etc/sysconfig/elastic-agent (on Debian):

# cat /etc/sysconfig/elastic-agent
KUBECONFIG=/etc/elastic-agent/kubeconfig.yml

And put a working kubeconfig file at that location.
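
If your distro's service unit doesn't read a sysconfig file, a systemd drop-in achieves the same thing (a sketch, with my example path). Run sudo systemctl edit elastic-agent and add:

[Service]
Environment=KUBECONFIG=/etc/elastic-agent/kubeconfig.yml

then restart with sudo systemctl restart elastic-agent. And if you run the agent standalone (not Fleet-managed), the provider can also be pointed at the file directly in elastic-agent.yml via its kube_config setting (again just a sketch, example path):

providers:
  kubernetes:
    kube_config: /etc/elastic-agent/kubeconfig.yml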

Here is the documentation for the Kubernetes provider: https://www.elastic.co/guide/en/fleet/current/kubernetes-provider.html

I hope this helps somehow