Problem with decode_json_fields

I have an ECK setup and I'm using Filebeat to ship logs from Kubernetes to Elasticsearch.

I've recently added the decode_json_fields processor to my configuration so that I'm able to decode the JSON that is usually in the message field.

      - decode_json_fields:
          fields: ["message"]
          process_array: false
          max_depth: 10
          target: "log"
          overwrite_keys: true
          add_error_key: true
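
For reference, this is what the processor is expected to do with the message field in the example document below: take the escaped JSON string and expand it as an object under the target. Roughly (the "after" here is pieced together from the decoded fields that show up in the debug log further down):

Before:

"message": "{\"message\":{\"log_type\":\"cron\",\"status\":\"start\"},\"level\":\"info\",\"timestamp\":\"2020-10-08T08:43:18.670Z\"}"

After, merged under the log target:

"log": {
  "level": "info",
  "message": { "log_type": "cron", "status": "start" },
  "timestamp": "2020-10-08T08:43:18.670Z"
}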

However, logs have stopped appearing since I added it.

Example log:

{
  "_index": "filebeat-7.9.1-2020.10.01-000001",
  "_type": "_doc",
  "_id": "wF9hB3UBtUOF3QRTBcts",
  "_score": 1,
  "_source": {
    "@timestamp": "2020-10-08T08:43:18.672Z",
    "kubernetes": {
      "labels": {
        "controller-uid": "9f3f9d08-cfd8-454d-954d-24464172fa37",
        "job-name": "stream-hatchet-cron-manual-rvd"
      },
      "container": {
        "name": "stream-hatchet-cron",
        "image": "<redacted>.dkr.ecr.us-east-2.amazonaws.com/stream-hatchet:v0.1.4"
      },
      "node": {
        "name": "ip-172-20-32-60.us-east-2.compute.internal"
      },
      "pod": {
        "uid": "041cb6d5-5da1-4efa-b8e9-d4120409af4b",
        "name": "stream-hatchet-cron-manual-rvd-bh96h"
      },
      "namespace": "default"
    },
    "ecs": {
      "version": "1.5.0"
    },
    "host": {
      "mac": [],
      "hostname": "ip-172-20-32-60",
      "architecture": "x86_64",
      "name": "ip-172-20-32-60",
      "os": {
        "codename": "Core",
        "platform": "centos",
        "version": "7 (Core)",
        "family": "redhat",
        "name": "CentOS Linux",
        "kernel": "4.9.0-11-amd64"
      },
      "containerized": false,
      "ip": []
    },
    "cloud": {
      "instance": {
        "id": "i-06c9d23210956ca5c"
      },
      "machine": {
        "type": "m5.large"
      },
      "region": "us-east-2",
      "availability_zone": "us-east-2a",
      "account": {
        "id": "<redacted>"
      },
      "image": {
        "id": "ami-09d3627b4a09f6c4c"
      },
      "provider": "aws"
    },
    "stream": "stdout",
    "message": "{\"message\":{\"log_type\":\"cron\",\"status\":\"start\"},\"level\":\"info\",\"timestamp\":\"2020-10-08T08:43:18.670Z\"}",
    "input": {
      "type": "container"
    },
    "log": {
      "offset": 348,
      "file": {
        "path": "/var/log/containers/stream-hatchet-cron-manual-rvd-bh96h_default_stream-hatchet-cron-73069980b418e2aa5e5dcfaf1a29839a6d57e697c5072fea4d6e279da0c4e6ba.log"
      }
    },
    "agent": {
      "type": "filebeat",
      "version": "7.9.1",
      "hostname": "ip-172-20-32-60",
      "ephemeral_id": "6b3ba0bd-af7f-4946-b9c5-74f0f3e526b1",
      "id": "0f7fff14-6b51-45fc-8f41-34bd04dc0bce",
      "name": "ip-172-20-32-60"
    }
  },
  "fields": {
    "@timestamp": [
      "2020-10-08T08:43:18.672Z"
    ],
    "suricata.eve.timestamp": [
      "2020-10-08T08:43:18.672Z"
    ]
  }
}

Why is this happening?

Hi @Kay_Khan, can you enable debug logging inside your Beat and have a look at the logs? They might provide some information, like parsing errors.
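
For example, something along these lines in filebeat.yml turns on debug output (the wildcard selector enables all debug selectors; it can be narrowed once you know which component matters):

logging.level: debug
logging.selectors: ["*"]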

Okay, after looking at the Filebeat logs, I can see the following error/warning message that relates to this:

2020-10-08T09:25:43.562Z WARN [elasticsearch] elasticsearch/client.go:407 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x36b243a0, ext:63737745936, loc:(*time.Location)(nil)}, Meta:null, Fields:{"agent":{"ephemeral_id":"5f8afdba-39c3-4fb7-9502-be7ef8f2d982","hostname":"ip-172-20-32-60","id":"0f7fff14-6b51-45fc-8f41-34bd04dc0bce","name":"ip-172-20-32-60","type":"filebeat","version":"7.9.1"},"cloud":{"account":{"id":"700849607999"},"availability_zone":"us-east-2a","image":{"id":"ami-09d3627b4a09f6c4c"},"instance":{"id":"i-06c9d23210956ca5c"},"machine":{"type":"m5.large"},"provider":"aws","region":"us-east-2"},"ecs":{"version":"1.5.0"},"host":{"architecture":"x86_64","containerized":false,"hostname":"ip-172-20-32-60","ip":["172.20.32.60","fe80::af:9fff:febe:dc4","172.17.0.1","100.96.1.1","fe80::6010:94ff:fe17:fbae","fe80::d869:14ff:feb0:81b3","fe80::e4f3:b9ff:fed8:e266","fe80::1c19:bcff:feb3:ce95","fe80::fc68:21ff:fe08:7f24","fe80::1cc2:daff:fe84:2a5a","fe80::3426:78ff:fe22:269a","fe80::b871:52ff:fe15:10ab","fe80::54ff:cbff:fec0:f0f","fe80::cca6:42ff:fe82:53fd","fe80::bc85:e2ff:fe5f:a60d","fe80::e05e:b2ff:fe4d:a9a0","fe80::43a:dcff:fe6a:2307","fe80::581b:20ff:fe5f:b060","fe80::4056:29ff:fe07:edf5","fe80::c8a0:5aff:febd:a1a3","fe80::74e3:feff:fe45:d9d4","fe80::9c91:5cff:fee2:c0b9"],"mac":["02:af:9f:be:0d:c4","02:42:1b:56:ee:d3","62:10:94:17:fb:ae","da:69:14:b0:81:b3","e6:f3:b9:d8:e2:66","1e:19:bc:b3:ce:95","fe:68:21:08:7f:24","1e:c2:da:84:2a:5a","36:26:78:22:26:9a","ba:71:52:15:10:ab","56:ff:cb:c0:0f:0f","ce:a6:42:82:53:fd","be:85:e2:5f:a6:0d","e2:5e:b2:4d:a9:a0","06:3a:dc:6a:23:07","5a:1b:20:5f:b0:60","42:56:29:07:ed:f5","ca:a0:5a:bd:a1:a3","76:e3:fe:45:d9:d4","9e:91:5c:e2:c0:b9"],"name":"ip-172-20-32-60","os":{"codename":"Core","family":"redhat","kernel":"4.9.0-11-amd64","name":"CentOS Linux","platform":"centos","version":"7 (Core)"}},"input":{"type":"container"},"kubernetes":{"container":{"image":"700849607999.dkr.ecr.us-east-2.amazonaws.com/stream-hatchet:v0.1.4","name":"stream-hatchet-cron"},"labels":{"controller-uid":"a79daeac-b159-4ba7-8cb0-48afbfc0711a","job-name":"stream-hatchet-cron-manual-c5r"},"namespace":"default","node":{"name":"ip-172-20-32-60.us-east-2.compute.internal"},"pod":{"name":"stream-hatchet-cron-manual-c5r-7cx5d","uid":"3251cc33-48a9-42b1-9359-9f6e345f75b6"}},"log":{"level":"info","message":{"log_type":"cron","status":"start"},"timestamp":"2020-10-08T09:25:36.916Z"},"message":"{"message":{"log_type":"cron","status":"start"},"level":"info","timestamp":"2020-10-08T09:25:36.916Z"}","stream":"stdout"}, Private:file.State{Id:"native::30998361-66306", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc001c14dd0), Source:"/var/log/containers/stream-hatchet-cron-manual-c5r-7cx5d_default_stream-hatchet-cron-4278d956fff8641048efeaec23b383b41f2662773602c3a7daffe7c30f62fe5a.log", Offset:539, Timestamp:time.Time{wall:0xbfd7d4a1e556bd72, ext:916563812286, loc:(*time.Location)(0x607c540)}, TTL:-1, Type:"container", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x1d8ff59, Device:0x10302}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [log.message] of type [keyword] in document with id '56aHB3UBLgYb8gz801DI'. Preview of field's value: '{log_type=cron, status=start}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:113"}}

You can see it contains the log event that was lost, along with the reason it was rejected:

{"type":"mapper_parsing_exception","reason":"failed to parse field [log.message] of type [keyword] in document with id '56aHB3UBLgYb8gz801DI'. Preview of field's value: '{log_type=cron, status=start}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:113"}}

I don't know how to fix this. @MarianaD, can you help me understand why this is happening?

I thought this might be an issue with "target": "log", so I've tried changing it to something arbitrary like "my_parsed_message", "m_log" or "mlog", and I get the same error for all of them.

{"type":"mapper_parsing_exception","reason":"failed to parse field [mlog.message] of type [keyword] in document with id 'J5KlDHUB_yo5bfXcn2LE'. Preview of field's value: '{log_type=cron, status=end}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:217"}}

@MarianaD, I don't understand what I'm doing wrong. Is this a bug?
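
For what it's worth, one direction worth trying (a sketch, not a confirmed fix; "app" and "app.msg" are arbitrary names picked for illustration): decode into a dedicated target and rename the nested message key before it lands on a field that is already mapped as keyword, using Filebeat's rename processor:

      - decode_json_fields:
          fields: ["message"]
          process_array: false
          max_depth: 10
          target: "app"
          overwrite_keys: true
          add_error_key: true
      # move the nested object out of the way of any existing keyword mapping
      - rename:
          fields:
            - from: "app.message"
              to: "app.msg"
          ignore_missing: true
          fail_on_error: false

Note that this only helps going forward: an existing field mapping can't be changed in place, so the already-conflicting field would still need a new index (or a rollover) or an updated index template.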
