Ingest mixed container logs with text and JSON [filebeat][docker]

Hi all,

We are currently using plain-text logging and import the container output into Elasticsearch and Kibana with Filebeat.

We would like to migrate to structured logging with JSON. Is a gradual migration possible? Can we have both text and JSON lines in the container output and still see both in Elasticsearch/Kibana?

Regards
bluepuma

Yep, that's possible by using two different templates for the same container and routing logs based on whether or not they start with {.

Here's an example of how it should work with k8s:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        - condition:
            contains:
              kubernetes.container.name: "my-app-1"
          config:
            - type: container
              paths:
                - "/var/log/containers/*-${data.kubernetes.container.id}.log"
              include_lines: ['^{']
              json.keys_under_root: true
              json.overwrite_keys: true
              json.add_error_key: true
              json.expand_keys: true
        - condition:
            contains:
              kubernetes.container.name: "my-app-1"
          config:
            - type: container
              paths:
                - "/var/log/containers/*-${data.kubernetes.container.id}.log"
              multiline.pattern: '^[[:blank:]]'
              multiline.negate: false
              multiline.match: after
              exclude_lines: ['^{']

Disclaimer: I have not tested the config. Please let me know if it works for you or if you had to make adjustments.

We don't use Kubernetes and have about 50 Docker containers, so maintaining two configurations per container is not really feasible.

@felixbarny: Feature request: new config option json.add_error_content: key

If a line cannot be parsed as JSON (error), it is simply added as plain text under key.
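As a hypothetical config fragment (json.add_error_content does not exist in Filebeat today; message is just an example target key), the proposed single-input setup might look like:

```yaml
- type: container
  paths:
    - /var/lib/docker/containers/${data.docker.container.id}/*.log
  json.keys_under_root: true
  json.add_error_key: true
  # Proposed option: lines that fail JSON parsing would be kept as
  # plain text under the given key instead of only being flagged.
  json.add_error_content: message
```

With something like this, one input could handle both JSON and plain-text lines without duplicating the template.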

You can either set up your conditions so that they match on a particular label or add multiple conditions to the same template for all of your apps: Multiple conditions with autodiscover & docker containers - #2 by steffens
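For example, one template could match several containers at once with an or condition (untested sketch; the container names are placeholders):

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            or:
              - contains:
                  docker.container.name: "my-app-1"
              - contains:
                  docker.container.name: "my-app-2"
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
```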

There's also a docker-based autodiscover so the k8s-style autodiscover example above would look like this for docker:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.labels: "log-format-json-and-text"
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              include_lines: ['^{']
              json.keys_under_root: true
              json.overwrite_keys: true
              json.add_error_key: true
              json.expand_keys: true
        - condition:
            contains:
              docker.container.labels: "log-format-json-and-text"
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              exclude_lines: ['^{']
              multiline.pattern: '^[[:blank:]]'
              multiline.negate: false
              multiline.match: after

See Autodiscover | Filebeat Reference [8.11] | Elastic for more info about autodiscover.

What's not working with json.add_error_key: true? Is the unparseable JSON stored under the wrong key, or is it completely absent from the indexed document?

Interesting, after removing json.message_key it just works for me :smiley:

docker run \
  --rm \
  --name testXYZ \
  --label co.elastic.logs/enabled=true \
  --label co.elastic.logs/json.keys_under_root=true \
  --label co.elastic.logs/json.add_error_key=false \
  python:3-slim python -c 'import time; print("{\"message\":\"testX\", \"session\":\"testY\"}\ntestZ", flush=True); time.sleep(10);'
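For reference, the co.elastic.logs/* labels above are picked up by hints-based autodiscover, which has to be enabled in the Filebeat config, e.g.:

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
```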

From log:

    {"message":"testX", "session":"testY"}
    testZ

To elastic:

    { "message": "testX", "session": "testY", ... }
    { "message": "testZ", ... }

For a JSON line, the message is set and the additional attributes come through as well; for a plain-text line, only the message is set.
:+1:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.