Filebeat journald input truncates custom fields at ~64KiB

I have an issue where custom fields in systemd-journald entries are being truncated at roughly 64 KiB when read by Filebeat's journald input. The regular "MESSAGE" field doesn't seem to be affected, only custom fields.

I can't seem to find an option in the docs that might allow for longer fields, am I missing something obvious?

To reproduce:

# filebeat.yml
filebeat.inputs:
- type: journald
  include_matches.match:
    - SYSLOG_IDENTIFIER=test

output.file:
  path: "/tmp/filebeat"
  filename: filebeat

// Create a journal event with a custom field called "data" containing 89784 characters of text
const Journald = require('systemd-journald');
const logger = new Journald({syslog_identifier: 'test'});
const data = 'abcdefghijklmnopqrstuvwxyz...'.repeat(3096);
logger.info('Test message', { data: data });

Verify that the length is correct in systemd-journald (aside from an added trailing newline):

journalctl MESSAGE="Test message" -o json --all --no-pager | jq -r .DATA | wc -m
> 89785

Look at the Filebeat output and see that the field has been truncated:

cat /tmp/filebeat/filebeat-20240207.ndjson | jq -r .journald.custom.data | wc -m
> 65532
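Incidentally (my own arithmetic, not from any documentation), the truncated length sits just 4 bytes under 64 KiB, which is consistent with the ~64 KiB limit in the title:

```javascript
// Compare the truncated length observed above (65532 characters) with 64 KiB.
const observed = 65532;
const limit = 64 * 1024; // 65536
console.log(limit - observed); // → 4
```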

If I enable debug logging in Filebeat, I see the already-truncated message in a processing/processors.go "Publish event" log line, so I suspect this is an input issue rather than an output issue. In production I use the Logstash output and have the same problem there.

Any suggestions for how I might get around this limit?
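In the meantime, one workaround sketch on the producer side (my own idea, not a Filebeat or systemd-journald feature; the `chunkField` helper and the 60 KiB chunk size are assumptions) is to split oversized values into numbered sub-fields that each stay under the observed boundary, and reassemble them downstream:

```javascript
// Sketch: split a long value into numbered sub-fields (data_0, data_1, ...)
// that each stay safely under the observed ~64 KiB truncation boundary.
const CHUNK = 60 * 1024;

function chunkField(value, name = 'data') {
  const fields = {};
  for (let i = 0; i * CHUNK < value.length; i++) {
    fields[`${name}_${i}`] = value.slice(i * CHUNK, (i + 1) * CHUNK);
  }
  return fields;
}

// Usage with the repro logger above:
//   logger.info('Test message', chunkField(data));
```

Downstream you would then need a processor (e.g. in Logstash) to concatenate the `data_N` fields back together, which is extra plumbing, so a proper fix in the input would be preferable.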

Optimistically giving this a bump. I've started digging around the source, but my Go-fu is weak and I don't immediately see any obvious issues.

Hi, is this an AI generated answer? The listing of not-quite-relevant suggestions, ending with a summary, sounds like a lot of the LLM output I've seen recently.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.