Filebeat fails to process Kibana JSON logs ("failed to format message from *-json.log") in a Kubernetes environment


(Juan Pablo Quiroga) #1

So I've deployed the ELK stack with Filebeat in a Kubernetes environment. All logs are being parsed correctly; the only problem is the Kibana JSON log format, which produces this error:

failed to format message from /var/lib/docker/containers/b685d94ec5e83c08cbe7728bcc9ebc3827cf2015c25490fb8d62e5c16c12b8ba/b685d94ec5e83c08cbe7728bcc9ebc3827cf2015c25490fb8d62e5c16c12b8ba-json.log

So I ran kubectl describe pods and realized that the Docker container in question was Kibana, version 6.5.2.

Filebeat configuration:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        path: ${path.config}/inputs.d/*.yml
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
    processors:
      - add_cloud_metadata:
      - drop_fields:
          when:
            has_fields: ['kubernetes.labels.app']
          fields:
            - 'kubernetes.labels.app'
    output.elasticsearch:
      hosts: ['http://elasticsearch.whitenfv.svc.cluster.local:9200']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      json.keys_under_root: false
      json.add_error_key: false
      json.ignore_decoding_error: true
      containers.ids:
        - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: {{ filebeat_image_full }}
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
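
Note that with json.keys_under_root: false in the kubernetes.yml input above, Filebeat nests all decoded JSON fields under a json. prefix, which is why Kibana's log text ends up at json.message rather than message. If the decoded keys were wanted at the event root instead, something like this should work (an untested sketch based on the Filebeat 6.x docker input's json options):

```yaml
# Untested alternative input config: lift decoded JSON keys to the event root
# instead of nesting them under "json." (Filebeat 6.x docker input json options).
- type: docker
  containers.ids:
    - "*"
  json.keys_under_root: true
  json.add_error_key: true
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
```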

(Sonja Krause Harder) #2

Hi @paltaa,

thank you for trying out the Logs UI and reporting this problem!

We're aware that the parsing and display of log messages need improving. The work currently being done on that is tracked in https://github.com/elastic/kibana/issues/26759 .

cheers,
Sonja


(Chris Cowan) #3

Can you post a sample of the event document JSON that was indexed into Elasticsearch?

Go to Discover:

  1. filter by source:"/var/lib/docker/containers/b685d94ec5e83c08cbe7728bcc9ebc3827cf2015c25490fb8d62e5c16c12b8ba/b685d94ec5e83c08cbe7728bcc9ebc3827cf2015c25490fb8d62e5c16c12b8ba-json.log"
  2. click on one of the records
  3. click on the JSON tab
  4. copy the _source.

Make sure to xxxx out any sensitive data.


(Juan Pablo Quiroga) #4
{
  "_index": "filebeat-6.5.2-2019.01.09",
  "_type": "doc",
  "_id": "WoPCM2gBPXD5Ivx45Ivl",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2019-01-09T17:57:16.136Z",
    "offset": 7033262,
    "input": {
      "type": "docker"
    },
    "host": {
      "name": "filebeat-f7hqh"
    },
    "beat": {
      "version": "6.5.2",
      "name": "filebeat-f7hqh",
      "hostname": "filebeat-f7hqh"
    },
    "meta": {
      "cloud": {
        "instance_id": "i-000000d5",
        "machine_type": "m1.large",
        "instance_name": "whitenfv-jptest-3.novalocal",
        "availability_zone": "nova",
        "provider": "openstack"
      }
    },
    "json": {
      "method": "post",
      "statusCode": 200,
      "req": {
        "method": "post",
        "headers": {
          "content-type": "application/x-ndjson",
          "accept-encoding": "gzip, deflate",
          "accept-language": "en-US,en;q=0.9,es;q=0.8,fr;q=0.7",
          "content-length": "1091",
          "accept": "application/json, text/plain, */*",
          "origin": "http://xxxx",
          "kbn-version": "6.5.2",
          "referer": "http://xxxxapp/kibana",
          "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36",
          "host": "198.204.227.93:30001",
          "connection": "keep-alive"
        },
        "remoteAddress": "10.233.64.0",
        "userAgent": "10.233.64.0",
        "referer": "xxxx/app/kibana",
        "url": "/elasticsearch/_msearch"
      },
      "type": "response",
      "@timestamp": "2019-01-09T17:57:14Z",
      "res": {
        "contentLength": 9,
        "statusCode": 200,
        "responseTime": 1444
      },
      "message": "POST /elasticsearch/_msearch 200 1444ms - 9.0B",
      "tags": [],
      "pid": 1
    },
    "source": "/var/lib/docker/containers/b685d94ec5e83c08cbe7728bcc9ebc3827cf2015c25490fb8d62e5c16c12b8ba/b685d94ec5e83c08cbe7728bcc9ebc3827cf2015c25490fb8d62e5c16c12b8ba-json.log",
    "stream": "stdout",
    "prospector": {
      "type": "docker"
    },
    "kubernetes": {
      "replicaset": {
        "name": "kibana-865c55468"
      },
      "labels": {
        "pod-template-hash": "865c55468"
      },
      "pod": {
        "name": "kibana-865c55468-vqb78"
      },
      "node": {
        "name": "node3"
      },
      "container": {
        "name": "kibana"
      },
      "namespace": "whitenfv"
    }
  },
  "fields": {
    "@timestamp": [
      "2019-01-09T17:57:16.136Z"
    ]
  },
  "highlight": {
    "source": [
      "@kibana-highlighted-field@/var/lib/docker/containers/b685d94ec5e83c08cbe7728bcc9ebc3827cf2015c25490fb8d62e5c16c12b8ba/b685d94ec5e83c08cbe7728bcc9ebc3827cf2015c25490fb8d62e5c16c12b8ba-json.log@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1547056636136
  ]
}

(Chris Cowan) #5

This PR outlines the setting for the Infrastructure and Logging UI's: https://github.com/elastic/kibana/pull/26579

You can add xpack.infra.sources.default.fields.message: ['message', '@message', 'json.message'] to the config/kibana.yml config file. By default the UI only looks at ['message', '@message'], so you just need to tell the UI where the message field is.

Update: Corrected config.kibana.yml to config/kibana.yml


(Juan Pablo Quiroga) #6

In a Kubernetes environment, where should I add it?


(Chris Cowan) #7

Whoops... config.kibana.yml was supposed to be config/kibana.yml. It should go in your Kibana config file, wherever that's being read from. I'm not very familiar with how Kibana is deployed via Kubernetes.


(Juan Pablo Quiroga) #8

Okay, for future reference, it should be done with a ConfigMap in a Kubernetes environment:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
  namespace: the-project
  labels:
    app: kibana
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
  kibana.yml: |-
    server.name: kib.the-project.d4ldev.txn2.com
    server.host: "0"
    elasticsearch.url: http://elasticsearch:9200
    xpack.infra.sources.default.fields.message: ['message', '@message', 'json.message']
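
The ConfigMap then gets mounted over the default kibana.yml inside the Kibana container. A hedged sketch of the relevant deployment fragment (the mount path and volume names are assumptions based on the default Kibana image layout, not taken from this thread):

```yaml
# Hypothetical deployment fragment: mount the "kibana" ConfigMap above over
# the default config file inside the container.
      containers:
      - name: kibana
        volumeMounts:
        - name: kibana-config
          mountPath: /usr/share/kibana/config/kibana.yml
          subPath: kibana.yml
          readOnly: true
      volumes:
      - name: kibana-config
        configMap:
          name: kibana
```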

(Chris Cowan) #9

Awesome! Did that config setting work for you?


(Juan Pablo Quiroga) #10

I'm testing; I'll get back to you very soon.


(Juan Pablo Quiroga) #11

Nope, it didn't work.


(Chris Cowan) #12

Bummer... Isn't the log message located at json.message relative to the _source attribute? I'm assuming you restarted Kibana? Just double checking :smiley:


(Juan Pablo Quiroga) #13

Yes, Kibana was restarted. Anyway, since these logs come from Kibana itself and are just API logs that aren't particularly important, would it be possible to leave them out, or maybe make changes in the Filebeat config?
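
For reference, one way to leave them out on the Filebeat side would be a drop_event processor in the docker input. An untested sketch, matching on the kubernetes.container.name field seen in the indexed document above:

```yaml
# Untested sketch: drop all events coming from the Kibana container itself,
# keyed on the metadata added by add_kubernetes_metadata.
- type: docker
  containers.ids:
    - "*"
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
    - drop_event:
        when:
          equals:
            kubernetes.container.name: "kibana"
```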


(Chris Cowan) #14

It should work out of the box :frowning: I was hoping that quick fix would do it, but it looks like I'm going to have to roll up my sleeves. I'll set up the same Filebeat importer on my system and get back to you in a bit. Either I will be able to get it working (as described above) or I will have to open a PR specifically for this use case. I will leave an update here either way :+1:


(Juan Pablo Quiroga) #15

Thanks a lot!


(Juan Pablo Quiroga) #16

Wait a second, I missed something. I'll get back to you in a few minutes.