Simple parsing in logstash

Hey !

I have a question :slight_smile:

Imagine a log file that contains two different kinds of lines:

My pattern is


In the first case, the pattern is OK! BUT in the second case, the line doesn't match.

I think the empty fields are the problem?
How can I make sure that my pattern adapts when the fields are empty?

Grok is great, but not necessarily the best filter for all types of data. In your case it might be easier to use the csv filter.

OK, I don't know the csv filter. Could you explain it or show me an example with my data?

filter {
  csv {
    columns => [ "name", "action", "date", "application" ]
    separator => ","
    convert => { "name" => "word", "action" => "word", "application" => "word" }
  }
  date {
    match => [ "datelog", "UNIX" ]
    remove_field => ["datelog"]
  }
}

Something like that?

In which cases should I use grok, and in which csv?

I also heard about multiline.

If you have the time @Christian_Dahlqvist, I would be very happy if you could explain with a use case.

Which filter you use depends on the format of the data. In this case it seems like a natural fit for the csv filter. Sometimes you can also combine multiple filters, e.g. first separate out sections of the log using grok and then apply other filters to the various parts.
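As an illustration of combining filters — a minimal sketch, where the line format, the grok pattern, and the column names are all hypothetical, not taken from this thread:

    filter {
      # Hypothetical line: "2023-01-01T10:00:00Z payment: alice,login,app1"
      grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{WORD:section}: %{GREEDYDATA:csv_part}" }
      }
      # Then parse only the comma-separated tail with the csv filter
      csv {
        source => "csv_part"
        columns => ["name", "action", "application"]
      }
    }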

The csv filter parses everything as strings, so I do not know what you are trying to achieve with your convert statement. I also do not see any field named datelog being extracted, so I suspect your date filter may fail.
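For the date filter to succeed, the field it matches has to actually be produced by the csv filter. A minimal sketch — the column layout and the assumption that the timestamp is an epoch value in the third column are illustrative, not from the original config:

    filter {
      csv {
        # "datelog" must be listed here so the date filter below can find it
        columns => ["name", "action", "datelog", "application"]
        separator => ","
      }
      date {
        # Parse the epoch-seconds value and use it as @timestamp
        match => ["datelog", "UNIX"]
        remove_field => ["datelog"]
      }
    }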

@Christian_Dahlqvist Thank you! The csv filter is easier to use than grok (which asks for value types, etc.).

I have a question about csv filter !

 if [type] == "test_edr" {
      csv {
        columns => [ ".,..,..,..,.." ]
        separator => ","
      }
    }
How do I delete the header line? With an if or something like that?

Also, I tried to convert fields. I tried this:

mutate {
  convert => { "edr_UsedTotalOctets" => "integer", "edr_GrantedTotalOctets" => "integer", "edr_Usage-Limit" => "integer" }
}

Maybe I must declare a separate mutate for each field?

It doesn't work.

Thanks for your help @Christian_Dahlqvist

The csv filter parses all fields as strings, so you will need to convert field types. There is no special handling of a header line, so what I often do when I have csv files with a header is to drop the record if I see that one of the parsed fields contains the expected header title (assuming I know no data line has this value).
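The header-drop approach described above could look like this — the column names and the converted field are assumptions for illustration:

    filter {
      csv {
        columns => ["name", "action", "datelog", "application"]
        separator => ","
      }
      # If the parsed "name" field holds its own column title,
      # this event is the header line, so drop it
      if [name] == "name" {
        drop { }
      }
      # csv extracts strings only; convert numeric fields afterwards
      mutate {
        convert => { "edr_UsedTotalOctets" => "integer" }
      }
    }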

You say that it is not working, but not what is wrong. For us to be able to help, please show an example raw event as well as the result of the processing when you output this to the stdout plugin with a rubydebug codec.
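For reference, a stdout output with the rubydebug codec looks like this:

    output {
      stdout { codec => rubydebug }
    }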

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.