Filebeat not parsing JSON in messages

I have deployed filebeat as a daemonset in Kubernetes for collecting logs and below is my filebeat configuration:

- type: log
  paths:
    - /var/lib/docker/containers/*/*.log
  multiline.pattern: "^[[:space:]]+(at|\\.{3})\\b|^Caused by:|^[[:space:]]"
  multiline.negate: false
  multiline.match: after
  json.message_key: log
  json.keys_under_root: true
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        namespace: ${POD_NAMESPACE}
    - decode_json_fields:
        fields: ["request", "response"]

My logs look like this:

{"uri":"x","request":{...}, "response":{...},"data":"abcd"}
{"uri":"y","request":{...}, "response":{...},"data":"xyz"}

Can you post your complete filebeat configuration? The indentation looks somewhat off in your code snippet.

Why do you have multiline and json in one type?

Is it the top-level JSON that is not being parsed, or the embedded 'request' and 'response' fields?

I have downloaded the official manifest file provided by Elastic and made changes to the prospectors file:

 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: filebeat-prospectors
   namespace: kube-system
   labels:
     k8s-app: filebeat
     kubernetes.io/cluster-service: "true"
 data:
   kubernetes.yml: |-
     - type: log
       paths:
         - /var/lib/docker/containers/*/*.log
       multiline.pattern: "^[[:space:]]+(at|\\.{3})\\b|^Caused by:|^[[:space:]]"
       multiline.negate: false
       multiline.match: after
       json.message_key: log
       json.keys_under_root: true
       processors:
         - add_kubernetes_metadata:
             in_cluster: true
             namespace: ${POD_NAMESPACE}
         - drop_event:
             when:
               or:
                 - equals:
                     kubernetes.namespace: "kube-system"
                 - equals:
                     stream: "stderr"

This is my complete ConfigMap manifest. I have added multiline for catching exceptions. Do these need to be in different prospector types? If yes, then how? The log file path would remain the same, so wouldn't that be a problem?

And nothing is getting parsed as of now. The whole JSON is coming through as-is into Elasticsearch, inside a field called "log".
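
For example, a document shows up in Elasticsearch looking roughly like this (other fields trimmed):

{
  "log": "{\"uri\":\"x\",\"request\":{...},\"response\":{...},\"data\":\"abcd\"}",
  ...
}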

Ok. Do you have a matching config in Logstash (or Elasticsearch) to catch those documents, interpret the contents of the log field as JSON, and break out the fields?
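
Something along these lines on the Logstash side, for example (a minimal sketch, assuming the original JSON string arrives in a field named log):

filter {
  json {
    # parse the JSON string held in the "log" field and
    # merge the resulting keys into the event
    source => "log"
  }
}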

Why Logstash? From my understanding of the docs, I just need to deploy filebeat to my Kubernetes cluster as a daemonset, and if the logs have JSON on separate lines, filebeat will automatically be able to parse them and send them to Elasticsearch with the respective fields.

Here is a snapshot from the docs:

Oh, I see. You are trying to forward the container logs. Docker can be somewhat painful there, especially regarding multiline handling and deciding when to do the JSON parsing. For this use case filebeat introduced the docker prospector type.
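
For context: with the json-file logging driver, every line your application writes gets wrapped in docker's own JSON envelope on disk, so the files under /var/lib/docker/containers/ actually look roughly like this:

{"log":"{\"uri\":\"x\",\"request\":{...},\"response\":{...},\"data\":\"abcd\"}\n","stream":"stdout","time":"..."}

That outer envelope is why your application's JSON ends up nested inside the log field.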

e.g.

- type: docker
  containers:
    ids:
      - "*"
    path: /var/lib/docker/containers
    stream: all
  multiline:
    pattern: ...
    ...
  processors:
    ...

The docker prospector type parses the docker JSON envelope before executing any other transformation/parsing. See containers in the docs.
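
If the embedded request and response values still arrive as JSON strings after that, you could decode them on top, e.g. (an untested sketch, assuming the top-level JSON has already been broken out into fields):

- type: docker
  containers:
    ids:
      - "*"
  processors:
    - decode_json_fields:
        # decode these two fields from JSON strings into objects
        fields: ["request", "response"]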

Maybe you want to start step by step. First be able to publish logs. Then add multiline for stack traces. Then add drop_event.


Yes, Beats 6.x has some built-in parsing. If it works for you, great, but unless you only have one data source (in your case, docker instances), it won't integrate with anything else.

I'm all for processing the data as close to the source as possible, but I haven't seen a way to configure that processing. Also, in my case, I don't control the endpoints and have no way of updating that code, so I do everything in Logstash.

Also, the module filters have really bad field names.

Ok let me try this out. I will get back to you on this.

Hey, I did two things:

  1. Upgraded to Beats 6.2.4.
  2. Used the docker type for the prospector and added the following line:
     json.keys_under_root: true
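
The relevant prospector now looks roughly like this (multiline and processors unchanged from the earlier config):

- type: docker
  containers:
    ids:
      - "*"
  json.keys_under_root: true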

Works now. Thanks for the help and the quick responses :)

Hi,

So, my original issue was solved, but I can see that CPU consumption for Filebeat spikes extremely high. Is this a known issue, or is it a problem in version 6.2.4?
