Multiline codec help for SYSLOG files

Hello,

I am trying to combine the following multiline output into single events using Logstash:

01,03,APR 10 16:25:45:453530,timer,ERROR ,"GFI Timer Error ==>"
01,03,APR 10 16:25:45:453530,timer,ERROR ,"No 14: Date (MON 04/10/17 16:25:45:453015 ), Error Type (CANCEL_INVALID_BLCOK), cpu(1)"
01,03,APR 10 16:25:45:453530,timer,ERROR ," modid(module(0)), flags(0x00000000), type(1), exp_time(318070975), timeout(16744),"
01,03,APR 10 16:25:45:453530,timer,ERROR ," data1(7c83), data2(1afda9c8), func(10090f04), create_ra(0x1008e954), tm_cancel_ra(0x1008ffbc)"
01,03,APR 10 16:25:45:453530,timer,ERROR ," bt: 1003c1e8 <- 1008f840 <- 1008ffdc <- 1009055c <- 10091cb8 <- 101ff258"
01,08,APR 10 16:25:45:452804,timer,ERROR ,"GFI Timer Error ==>"
01,08,APR 10 16:25:45:452804,timer,ERROR ,"No 09: Date (MON 04/10/17 16:25:45:452233 ), Error Type (CANCEL_INVALID_BLCOK), cpu(1)"
01,08,APR 10 16:25:45:452804,timer,ERROR ," modid(module(0)), flags(0x00000000), type(1), exp_time(324860186), timeout(16744),"
01,08,APR 10 16:25:45:452804,timer,ERROR ," data1(7c83), data2(1afdade8), func(10090f04), create_ra(0x1008e954), tm_cancel_ra(0x1008ffbc)"
01,08,APR 10 16:25:45:452804,timer,ERROR ," bt: 1003c1e8 <- 1008f840 <- 1008ffdc <- 1009055c <- 10090688 <- 101eabd4"
01,10,APR 10 16:25:49:527170,DP_CHECK,ERROR ,"tq_check_array_index_validity FAILED (23 same log suppressed)"

The pattern for the SYSLOG lines is the following (or at least starts with the timestamp):
01,03,APR 10 16:25:45:453530
chassis, slot, timestamp

or just the timestamp with microseconds:
APR 10 16:25:45:453530

My initial idea was to strip the 6-digit microseconds from the timestamp, but that ends up giving me the same timestamp for multiple events.
Another idea is to key on [chassis, slot, timestamp], but so far no luck...
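
To show what I mean by the second idea, this is roughly the grok pattern I have in mind, keeping the whole chassis/slot/timestamp prefix together so events stay distinct (the field names are just my own, so this may well be wrong):

```
grok {
  # hypothetical: capture the full prefix, including the 6-digit fraction, as the key
  match => { "message" => "^%{NUMBER:chassis},%{NUMBER:slot},%{WORD:month} %{NUMBER:day} %{TIME:logtime},%{GREEDYDATA:rest}" }
}
```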

I have tried the following:
filter {
  grok {
    match => { "message" => "%{DATESTAMP:LOGTIME}" }
  }
  date {
    match => ["LOGTIME", "MMM dd HH:mm:ss:SSSSSS"]
    timezone => "UTC"
    remove_field => ["LOGTIME"]
  }
  multiline {
    pattern => "(^%{NUMBER:chassis},%{NUMBER:slot},%{WORD:MONTH} %{NUMBER:DAY} %{TIME:LOGTIME})"
    negate => true
    what => "previous"
  }
  grok {
    match => { "message" => "%{NUMBER:chassis},%{NUMBER:slot},%{WORD:MONTH} %{NUMBER:DAY} %{TIME},%{WORD:UNIT_REPORTING},%{WORD:LOGLEVEL} ,%{GREEDYDATA:syslog_message}" }
  }
}
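
I also wondered whether the multiline codec on the input would work better than the multiline filter. My guess would be something like this (the file path is just a placeholder):

```
input {
  file {
    path => "/path/to/syslog.log"   # placeholder path
    codec => multiline {
      # any line that does NOT start with chassis,slot,timestamp is appended to the previous event
      pattern => "^%{NUMBER},%{NUMBER},%{WORD} %{NUMBER} %{TIME}"
      negate => true
      what => "previous"
    }
  }
}
```

Though now I notice that every line in my sample starts with that prefix, so with this pattern nothing would actually get merged; maybe the grouping needs to key on the timestamp value instead.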

and I have tried the following with the CSV filter, based on something I found on the web, but no luck so far either:

filter {

  # the csv plugin takes the input lines from the file and breaks them into the columns defined below
  csv {
    columns => [
      "Chassis",
      "Slot_ID",
      "LOGTIME",
      "reporting_type",
      "event_level",
      "log_message"
    ]
    separator => ","
    remove_field => ["message"]
  }

  # remove all 6 digits after the seconds to meet the time standard
  mutate {
    gsub => ["LOGTIME", ":\d{6}", ""]
    add_field => { "LOG_TYPE" => "SYSLOG" }
  }

  mutate {
    convert => {
      "Chassis" => "integer"
      "Slot_ID" => "integer"
    }
  }

  date {
    match => ["LOGTIME", "MMM dd HH:mm:ss"]
    timezone => "UTC"
    remove_field => ["LOGTIME"]
  }
}
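
I am also not sure the date filter can actually parse six fractional digits, so another thought was to keep only the first three digits as milliseconds before matching (this is just a guess on my part):

```
mutate {
  # hypothetical: turn "16:25:45:453530" into "16:25:45.453" (millisecond precision)
  gsub => ["LOGTIME", '(\d{2}:\d{2}:\d{2}):(\d{3})\d{3}$', '\1.\2']
}
date {
  match => ["LOGTIME", "MMM dd HH:mm:ss.SSS"]
  timezone => "UTC"
}
```

That would at least keep the events in my sample distinct (453 vs 452) while staying within millisecond precision.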

Can someone guide me and let me know the mistakes I am making?

Thank you
