Filebeat decode_csv_fields processor silently drops fields longer than 1024 characters

Hello. I am trying to process some CSV files in Filebeat with the decode_csv_fields processor, but it seems to be silently dropping fields that are longer than 1024 characters.
Here is my Filebeat input and processor configuration:

- type: filestream
  enabled: true
  paths:
    - /mnt/file.csv
  pipeline: "00-custom-pipeline"
  processors:
    - decode_csv_fields:
        fields:
          message: CSVfields
        separator: ","
        ignore_missing: false
        overwrite_keys: true
        trim_leading_space: false
        fail_on_error: true

For debugging purposes I also configured an extract_array processor, and it confirmed that any field longer than 1024 characters does not show up.
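For reference, this is roughly the kind of debug processor I mean, chained after decode_csv_fields in the same processors list (the target field names and column indices here are just illustrative):

    - extract_array:
        # read the array produced by decode_csv_fields
        field: CSVfields
        mappings:
          # copy selected columns into their own fields so they are easy to inspect
          debug.first_column: 0
          debug.long_column: 5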

Is this limit configurable, or is it a hard limitation that I can't work around?
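In case it helps with reproducing this, here is a minimal standalone config sketch based on the setup above, writing to the console instead of going through Elasticsearch and the ingest pipeline (the path and field names are the same illustrative ones as above):

filebeat.inputs:
  - type: filestream
    enabled: true
    paths:
      - /mnt/file.csv
    processors:
      # decode the raw CSV line from "message" into the "CSVfields" array
      - decode_csv_fields:
          fields:
            message: CSVfields
          separator: ","
          fail_on_error: true

# print events to stdout so the decoded fields can be inspected directly
output.console:
  pretty: true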

Thanks.
