There appears to be a long-standing issue with large messages (longer than 16 KB) getting split into parts and appearing in Kibana as multiple log entries. Such long messages typically include Java exception stack traces.
Search as I might, I have not found a filter that can concatenate those parts so that they appear as a whole in a single log entry, where the message can be properly parsed and indexed. I have tried a few filters of my own, with little success.
Kibana only shows the messages that are in Elasticsearch. If, for example, a Java exception stack trace is split across multiple documents, the problem happens during ingestion of the data, and it can be solved using the multiline codec that exists in both Logstash and Filebeat.
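For the Filebeat case, a minimal sketch of such a configuration might look like this; the path is a placeholder, and the pattern assumes the common convention that stack-trace continuation lines begin with whitespace:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log        # placeholder log location
    multiline:
      pattern: '^[[:space:]]'     # continuation lines start with whitespace
      negate: false
      match: after                # append them to the previous line
```

With `negate: false` and `match: after`, every line matching the pattern is appended to the preceding line, so a whole stack trace is shipped as a single event.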
Thanks for responding!
I'm using EFK, i.e. FluentD; it's running as a Helm chart on an AWS EKS cluster. This link describes the problem in some detail and contains a proposed solution by user=dsx0, one that I tried but that didn't work for me. EFK is only a very small part of my job, so I'm not well versed in it. Still, it's up to me to fix this nagging problem.
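For reference, this is the kind of filter I have been experimenting with, a sketch based on the fluent-plugin-concat plugin; the tag, the `log` field name, and the timestamp regex are assumptions about my setup:

```
<filter kubernetes.**>
  @type concat
  key log
  # a new record starts when a line begins with a timestamp;
  # any other line (e.g. a stack-trace line) is appended to the
  # record currently being built
  multiline_start_regexp /^\d{4}-\d{2}-\d{2}/
  # flush a pending record if no continuation arrives in time
  flush_interval 5
</filter>
```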
Unfortunately, the issue is not related to Kibana or to any other tool in the Elastic Stack.
Your messages are being split before they arrive in Elasticsearch, and each part of the message is treated as an individual document.
The way to solve this is to combine the parts of the split message and reconstruct it before sending it to Elasticsearch. Both Logstash and Filebeat, which are part of the Elastic Stack, have a multiline codec that helps you reconstruct the messages, but I have no idea whether FluentD has something similar.
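In Logstash, for example, a minimal sketch of the multiline codec on a file input could look like this; the path is a placeholder, and the pattern treats indented lines as continuations of the previous event:

```
input {
  file {
    path => "/var/log/app/app.log"   # placeholder path
    codec => multiline {
      pattern => "^\s"               # indented lines...
      what => "previous"             # ...belong to the previous event
    }
  }
}
```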