Filebeat on AWS EKS worker node

I'm an Elastic Cloud subscriber. We are standing up an EKS cluster on AWS, but would like to have filebeat exist outside of Kubernetes, directly on the worker node.

I'm having some trouble understanding what to put in the kube_config setting, as there isn't a kubeconfig on the worker node that I know of, and certainly not at ${HOME}/.kube/config.

Here is my filebeat.inputs:


- type: log
  enabled: true
  paths:
    - /var/log/*.log

- type: docker
  combine_partial: true
  containers:
    path: "/var/lib/docker/containers"
    ids:
      - "*"
  json.keys_under_root: true
  json.add_error_key: true
  tags:
    - "container_logs_nonprod"
    - "eks_services"
  processors:
    - add_kubernetes_metadata:
        in_cluster: false
        host: i-[REDACTED]
        kube_config: ${HOME}/.kube/config

Is there any reason you are setting json.* in the Docker input? Filebeat takes care of parsing the messages from the JSON coming from Docker. You only need that option if your logs are JSON under the key log. Is that the case?
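To make the distinction concrete: Docker's json-file logging driver wraps every log line in a JSON envelope, and the docker input decodes that envelope by itself. The json.* options only matter when the application's message inside the log key is itself JSON. Two illustrative (made-up) lines as they would appear in /var/lib/docker/containers/*/*.log:

```json
{"log":"starting server on :8080\n","stream":"stdout","time":"2019-05-01T12:00:00.000Z"}
{"log":"{\"level\":\"info\",\"msg\":\"request handled\"}\n","stream":"stdout","time":"2019-05-01T12:00:01.000Z"}
```

The first line needs no json.* settings at all. Only if your containers emit lines like the second one, where the log field contains embedded JSON, would options such as json.keys_under_root come into play.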

Also, if you are a customer, you can submit a support ticket to get an answer.

The JSON configuration was not really the issue I was asking about, but in any case: I added it assuming I had to. Are you saying that if the logs are one JSON object per line with no root key, then no further configuration is needed?

Also, what about the kubeconfig? Do I have to create a service account with RBAC permissions and deploy a kubeconfig for Filebeat to use, or something along those lines?
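One plausible approach (a sketch, not verified against your cluster; all names here are illustrative): the add_kubernetes_metadata processor only needs read access to cluster metadata, so you can create a dedicated service account with a minimal ClusterRole and then build a kubeconfig around that service account's token for Filebeat to use.

```yaml
# Minimal RBAC for a host-level Filebeat using add_kubernetes_metadata.
# The name "filebeat-host" is an illustrative assumption.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat-host
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat-host
rules:
- apiGroups: [""]
  resources: ["pods", "namespaces", "nodes"]   # metadata the processor enriches events with
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat-host
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: filebeat-host
subjects:
- kind: ServiceAccount
  name: filebeat-host
  namespace: kube-system
```

As a quicker test on EKS, worker nodes built from the Amazon EKS-optimized AMI typically have the kubelet's own kubeconfig at /var/lib/kubelet/kubeconfig; pointing kube_config at that file can work, though it grants broader access than Filebeat strictly needs.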

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.