Catching/Logging Events That Break Parsing

Hi Everyone,

So I am using the ELK stack to parse logs shipped by Filebeat.

I ran into an issue today where ~900 log lines were dropped when they should not have been. I know which lines they were, but when I looked through the Logstash, Elasticsearch, and Filebeat logs I could not find any mention of them.

So I am curious: does anyone write Logstash configs in such a way that if an event is dropped, either the raw data is still indexed, or at least a document is created that would help in fixing the config?
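For example, I was imagining something along these lines, where events that fail a grok match get routed to their own index instead of disappearing. This is just a rough sketch; the grok pattern and index names are placeholders, not my actual config:

```
filter {
  grok {
    # placeholder pattern; grok adds the "_grokparsefailure" tag
    # automatically whenever the match fails
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  if "_grokparsefailure" in [tags] {
    # keep the raw, unparsed event around so it can be inspected later
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "parse-failures-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logs-%{+YYYY.MM.dd}"
    }
  }
}
```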

I've seen some of my SQL colleagues put "debugging" lines in their stored procs that write messages like "sp_example_1 failed at step 8 of 12" to a log. Is there a Logstash equivalent to this?
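The closest equivalent I could think of would be adding a breadcrumb tag after each filter stage, so a half-parsed document at least shows how far it got before a filter failed. Again just a sketch; the filters and tag names here are made up:

```
filter {
  grok {
    match   => { "message" => "%{COMBINEDAPACHELOG}" }
    add_tag => ["stage_1_grok_ok"]            # only added when the match succeeds
  }
  date {
    match          => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    add_tag        => ["stage_2_date_ok"]     # only added when the parse succeeds
    tag_on_failure => ["stage_2_date_failed"]
  }
}
```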

Have you looked into the dead letter queue feature?
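It only captures events that the elasticsearch output rejects (mapping conflicts and similar 400/404 responses), not events dropped inside filters, but it may cover your case. Roughly: you enable it in logstash.yml, then read the failed events back with the dead_letter_queue input plugin and index them somewhere you can inspect. A sketch, assuming a default data path and a pipeline called "main":

```
# logstash.yml
dead_letter_queue.enable: true
dead_letter_queue.max_bytes: 1024mb
```

```
# Second pipeline: re-read failed events into a separate index for debugging
input {
  dead_letter_queue {
    # the queue lives under <path.data>/dead_letter_queue; adjust for your install
    path           => "/var/lib/logstash/dead_letter_queue"
    commit_offsets => true
    pipeline_id    => "main"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "dlq-%{+YYYY.MM.dd}"
  }
}
```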
