Logstash - Is it Possible To Handle Dissect Failures Without Filling Up Syslog

Hello ELK Community,

I have a pipeline that processes a certain type of log file. Depending on where the log file was generated, the fields may differ, which causes my dissect filter to fail.

With the following code block I can catch the failure and run a grok filter that parses it successfully (or run a second dissect):

    if ("_dissectfailure" in [tags]) {
        grok {
            match => { "message" => [ "..." ] }
            remove_field => [ "tags" ]
        }
    }

The problem is that syslog logs an error message for every dissect failure, which causes disk space issues. Is there a better way to handle this scenario? I think setting tag_on_failure to false still causes the error to be logged.

What gets logged?

Aug 24 12:19:02 machinename.fqdn.com logstash[23808]: [2020-08-24T12:19:02,454][WARN ][org.logstash.dissect.Dissector] Dissector mapping, pattern not found {"field"=>"message", "pattern"=>"%{logTimestamp} %{logClass} %{} %{logLevel} %{logMessage}", "event"=>{}}

I removed the event data.

The dissector is quite aggressive about logging warnings, which is why it is recommended to do a pattern check up front, to be sure the dissect will work.
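
A pattern check along those lines can be done with a conditional before the dissect, so the filter only runs when the message looks like the expected format and the fallback goes straight to grok without ever triggering the dissector's WARN. This is a sketch: the regex here is just an illustration and would need to match your actual timestamp layout, and the grok pattern is left elided as in the original post.

    filter {
      # Only dissect messages that start with something timestamp-like;
      # everything else falls through to grok, so the dissector never
      # sees a message it cannot parse and never logs the WARN.
      if [message] =~ /^\d{4}-\d{2}-\d{2}/ {
        dissect {
          mapping => {
            "message" => "%{logTimestamp} %{logClass} %{} %{logLevel} %{logMessage}"
          }
        }
      } else {
        grok {
          match => { "message" => [ "..." ] }
        }
      }
    }

Alternatively, the dissector's logger (org.logstash.dissect) can be raised to ERROR in config/log4j2.properties or via the node logging API, which should silence those WARN lines, though it would also hide genuine mapping problems.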

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.