Combine separated JSON logs in k8s

Hi all,

We are using a Filebeat DaemonSet with autodiscover to scrape all container logs, which are all in the JSON Logstash format. With the Docker container runtime, the json-file logging driver splits larger log lines into 16 KB parts, so each part on its own is not valid JSON and cannot be parsed by the Filebeat processor. Is it possible to use something like `combine_partial` from the Filebeat `docker` input type to achieve a valid JSON document?

This is (part of) the Filebeat configuration we use:

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints:
            enabled: true
            default_config:
              type: container
              harvester_buffer_size: 32768
              containers.ids:
                - "${data.kubernetes.container.id}"
              enabled: false
              paths:
                - /var/log/containers/*${data.kubernetes.container.id}.log

Thanks a lot

Dennis

Hi @Dennis_Haseloff !

I'm not sure if this is doable, but maybe you can try using the `format` option along with `multiline`. Please see Container input | Filebeat Reference [master] | Elastic for more information.
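
A rough sketch of what that could look like in the autodiscover `default_config`, assuming each Logstash-format JSON document begins with `{` at the start of the line and the split continuation parts do not (the multiline pattern here is an assumption, adjust it to your actual log shape):

    default_config:
      type: container
      format: docker   # force docker json-file parsing instead of auto-detect
      paths:
        - /var/log/containers/*${data.kubernetes.container.id}.log
      multiline:
        # Assumption: every complete JSON log document starts with "{",
        # so lines that do NOT match are appended to the previous line.
        pattern: '^\{'
        negate: true
        match: after

This is only a sketch, not a verified configuration, so I'd test it against one of the split 16 KB log lines before rolling it out.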

C.