Split CSV data from beats

We are new to ELK and need help with our Logstash filter. We don't need to filter out any rows, just split the data into its respective columns and use the Timestamp column as the event timestamp.

*.log  File Beats  Log Stash  Elastic Search
Log files are text based CSV using “|” as a separator
Each log line begins with [
Log lines may span multiple rows
Log line format: [Timestamp]|UtcOffset|Level|Code|Reference|Message

[2018-03-05 15:26:36.456]|-420|DEBUG|C001|row: 1|my useful message
[2018-03-05 15:29:05.137]|-420|INFO|C042|emp: 1002|another useful log message
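Since log lines can span multiple rows and each one starts with [, Filebeat should join continuation lines before shipping them. A minimal sketch of the relevant filebeat.yml section, assuming Filebeat 6.x (the path is illustrative):

filebeat.prospectors:
  - paths:
      - /var/log/myapp/*.log
    # treat any line that does NOT start with "[" as a continuation
    # of the previous line
    multiline.pattern: '^\['
    multiline.negate: true
    multiline.match: after

Without this, each physical row of a multi-row log line arrives as a separate event and the csv filter cannot reassemble them.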

What I have so far:
input {
  beats {
    host => "localhost"
    port => "5044"
  }
}

filter {
  csv {
    columns => ["TimeStamp","UtcOffset","Level","Code","Reference","Message"]
    separator => "|"
    skip_empty_columns => "true"
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    # index names must be lowercase, so PSS_Log_... would be rejected
    index => "pss_log_%{+YYYY.MM.dd}"
  }

  stdout { }
}

With the csv filter you wrote, you can add a date filter to do this:

date {
    match => ["TimeStamp", "yyyy-MM-dd HH:mm:ss.SSS"]   # parse TimeStamp into the @timestamp field
  }

If you want to remove the square brackets from the TimeStamp field, you can use mutate's gsub to strip them before applying the date filter.
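A sketch of the combined filter section (the field name TimeStamp matches your csv columns; the gsub pattern simply deletes [ and ] characters):

filter {
  mutate {
    # remove the "[" and "]" wrapping the timestamp value
    gsub => ["TimeStamp", "[\[\]]", ""]
  }
  date {
    match => ["TimeStamp", "yyyy-MM-dd HH:mm:ss.SSS"]
  }
}

This would turn "[2018-03-05 15:26:36.456]" into "2018-03-05 15:26:36.456" so the date pattern can match it.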
