Multiline stacktrace in JSON not parsed by Filebeat / not shown in Kibana

I use the JSON Template Layout from Apache Log4j to write JSONL logs, which I then ingest with Filebeat using this configuration:

filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /srv/inception-stable/logs/inception-stable.log
    - /srv/inception-community/logs/inception-community.log
    - /srv/inception-testing/logs/inception-testing.log
    - /srv/inception-experimental/logs/inception-experimental.log
  json.keys_under_root: true
  json.add_error_key: true

  processors:
    - copy_fields:
        fields:
          - from: "log.file.path"
            to: "inception-instance"
        fail_on_error: false
        ignore_missing: true

setup.template.settings:
  index.number_of_shards: 1

setup.template.enabled: true
setup.template.overwrite: true

output.elasticsearch:
  hosts: ["localhost:9200"]

setup.kibana:
  host: "localhost:5601"

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
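For reference, a single event from the JSON Template Layout looks roughly like this (a minimal sketch with made-up class names and a shortened stacktrace; the exact keys depend on the event template in use, and in the log file each event is one line rather than pretty-printed). Note that the stacktrace is a single JSON string with escaped \n characters, so it is only "multiline" once rendered:

{
  "timestamp": "2021-03-01T10:15:30.123Z",
  "level": "ERROR",
  "loggerName": "com.example.SomeService",
  "message": "Something went wrong",
  "exception": {
    "type": "java.lang.IllegalStateException",
    "message": "Something went wrong",
    "stacktrace": "java.lang.IllegalStateException: Something went wrong\n\tat com.example.SomeService.doWork(SomeService.java:42)\n\tat ..."
  }
}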

The events show up in Kibana, except for the exception.stacktrace field. I suspect the field is dropped when it is too long, because for shorter stacktraces it does appear. Is there a size limit on JSON decoding in Filebeat?

[Attachment: one JSON entry]
[Attachment: the corresponding Kibana entry]
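For what it's worth, the only size-related setting I am aware of on the log input is max_bytes. Here is a sketch of how it could be raised, assuming the truncation actually happens when Filebeat reads the line (the default is 10485760 bytes, i.e. 10 MB, so a stacktrace would have to be very large to hit it):

filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /srv/inception-stable/logs/inception-stable.log
  json.keys_under_root: true
  json.add_error_key: true
  # Maximum size of a single log line; anything beyond this is discarded.
  # The default is 10485760 (10 MB).
  max_bytes: 20971520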


Given that you are sending data directly from Filebeat to Elasticsearch without using Logstash, you might want to move this to the Filebeat forum.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.