Kubernetes/Filebeat - How to Handle JSON Logging for some containers


I understand the basic premise: I need to configure autodiscover, and then configure different conditions/templates within it, to specify how each container's logs should be handled. I have not been able to find a working example, though.

We have multiple deployments. The deployments that log JSON have `logging: json` set as both a label and an annotation, so that should be an easy way to identify which logs need to be handled as JSON.
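From the Filebeat docs on hints-based autodiscover, it looks like the annotation route could do the parsing for us, since `hints.enabled: true` is already on. A sketch of the pod template metadata — `myapp` is a placeholder container name, not one of our real ones:

```yaml
# Pod template metadata for a JSON-logging deployment (sketch).
# The co.elastic.logs/* annotations are picked up because Filebeat
# runs with hints.enabled: true; scoping them with the container
# name ("myapp" here is a placeholder) limits the JSON settings
# to that one container.
metadata:
  labels:
    logging: json
  annotations:
    co.elastic.logs.myapp/json.keys_under_root: "true"
    co.elastic.logs.myapp/json.add_error_key: "true"
```

If the annotations are written without the container-name scope (`co.elastic.logs/json.keys_under_root`), they apply to every container in the pod, which matters once sidecars are involved.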

Our current `filebeat.yaml`:

```yaml
setup.dashboards.enabled: true
setup.ilm:    # assumed parent key; the bare "key:" line was missing from the original paste
  enabled: false
filebeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log
        exclude_lines: ["^\\s+[\\-`('.|_]"]
processors:
  - add_kubernetes_metadata:
  - add_cloud_metadata:
```

Example Log App 1:

```json
{"time":1577127646.304000000,"level":"INFO","logger":"net.idauto.arms.gmm.import2.GroupImportEngine","thread":"main","message":"Starting the Group Import Engine..."}
```

Example Log App 2:

```json
{"level":"info","msg":"found role","pod.iam.role":"arn:aws:iam::xxxxxx:role/rolename","pod.ip":"","time":"2019-12-23T18:58:21Z"}
```
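Note the two apps disagree on field names (`message` vs `msg`) and on the `time` format (epoch seconds vs an RFC 3339 string), so whatever ingests them has to reconcile that, e.g. with an ingest pipeline or processors. A small illustrative Python sketch of the mapping (this is not Filebeat code, just a model of the reconciliation; the output field names are my own choice):

```python
import json
from datetime import datetime, timezone

def normalize(line: str) -> dict:
    """Parse one JSON log line and map the two differing shapes
    (message vs. msg, epoch-seconds vs. RFC 3339 time) onto one record.
    Field names are taken from the two example logs above."""
    event = json.loads(line)
    raw_time = event.get("time")
    if isinstance(raw_time, (int, float)):
        # App 1 style: epoch seconds as a JSON number
        ts = datetime.fromtimestamp(raw_time, tz=timezone.utc)
    else:
        # App 2 style: RFC 3339 string with a trailing Z
        ts = datetime.strptime(raw_time, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return {
        "@timestamp": ts.isoformat(),
        "level": event["level"].upper(),
        "message": event.get("message") or event.get("msg"),
    }

app1 = '{"time":1577127646.304000000,"level":"INFO","logger":"net.idauto.arms.gmm.import2.GroupImportEngine","thread":"main","message":"Starting the Group Import Engine..."}'
app2 = '{"level":"info","msg":"found role","pod.iam.role":"arn:aws:iam::xxxxxx:role/rolename","pod.ip":"","time":"2019-12-23T18:58:21Z"}'
```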

This is complicated by the fact that we run Linkerd, so every pod has a sidecar container that does not log JSON.

Example Sidecar log:

```text
INFO [ 0.002965s] linkerd2_proxy::app::main using identity service at Name(NameAddr { name: "linkerd-identity.linkerd.svc.cluster.local", port: 8080 })
```
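Within Filebeat this split would be handled by the `json.*` input settings or the `decode_json_fields` processor rather than hand-written code, but the underlying distinction is just "does the line parse as a JSON object?". A hypothetical sketch, for illustration only:

```python
import json

def is_json_line(line: str) -> bool:
    """Heuristic: treat a line as structured only if it parses as a
    JSON object. Mirrors (loosely) what JSON decoding in a shipper
    would accept; plain-text sidecar lines fail the first check."""
    line = line.strip()
    if not line.startswith("{"):
        return False
    try:
        return isinstance(json.loads(line), dict)
    except json.JSONDecodeError:
        return False

sidecar = 'INFO [ 0.002965s] linkerd2_proxy::app::main using identity service at Name(NameAddr { name: "linkerd-identity.linkerd.svc.cluster.local", port: 8080 })'
app_line = '{"level":"info","msg":"found role","pod.ip":"","time":"2019-12-23T18:58:21Z"}'
```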

It would be a huge help to our workflow if we could figure out how to properly parse the JSON and send it to our Elastic Cloud deployment.

For the pods whose containers do output JSON, we also know the container name ahead of time, if that would help differentiate them from the linkerd-proxy container.
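Since the container name is known ahead of time, an alternative to hints would be a conditions-based autodiscover template. A sketch — `myapp` is a placeholder for the real container name, and the `json.*` options shown are the standard Filebeat input settings:

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${NODE_NAME}
      templates:
        # JSON-logging app container ("myapp" is a placeholder name).
        - condition:
            equals:
              kubernetes.container.name: myapp
          config:
            - type: container
              paths:
                - /var/log/containers/*${data.kubernetes.container.id}.log
              json.keys_under_root: true
              json.add_error_key: true
        # Everything else (e.g. linkerd-proxy) is read as plain text.
        - condition:
            not:
              equals:
                kubernetes.container.name: myapp
          config:
            - type: container
              paths:
                - /var/log/containers/*${data.kubernetes.container.id}.log
```

The trade-off versus hints is that the condition list has to be kept in sync with the set of JSON-logging container names, whereas annotations travel with each deployment.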
