Special occurrences


I use my ELK stack to store logs, and sometimes the logs will be in this format:
field1, field2, field3,
and so on, so I basically use CSV to separate the fields. A little problem arose... Sometimes, a field will actually look like this:
field1, "fie, ld2", field 3

The way I thought of to solve this issue would be to count the number of commas: if there are more than the expected number, it means that field2 contains one, and therefore I should use the quotes to see what's in there...
However, I could not find a way to do that in Logstash.

Does somebody have an idea how to count the commas, or maybe a better way to proceed?


A csv filter will handle quoted fields, but the entire field has to be quoted, with no leading space. So this might work, but it is going to be fragile.

    mutate { gsub => [ "message", ', "', ',"' ] }
    csv { source => "message" }
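
To make the effect concrete, this is roughly what that pair of filters does to a line like the one above (column names are the csv filter's defaults; note that column3 keeps its leading space, since only a space sitting before a double quote is removed):

    # input:      field1, "fie, ld2", field3
    # after gsub: field1,"fie, ld2", field3
    # after csv:  column1 => "field1", column2 => "fie, ld2", column3 => " field3"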

Hi Badger,

If I'm not mistaken, your solution assumes that every single log is written as

field1, "fie, ld2", field3

As I have stated, that is not always the case.

I do not believe I have made that supposition.

Oh yes, my bad, I misunderstood your code. If I now understand correctly, your code will delete any comma that is between quotes, right?

If that is the case, although I thank you for this idea, is there no way to preserve the comma? I would prefer not to alter the logs...

No, the gsub removes a single space between a comma and a double quote.

If you really need to preserve the exact format of the logs, you could fork the csv filter code and rewrite it to handle partially quoted fields.
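
If the worry is only about mutating [message] itself, one workaround (my own sketch, not something tested here; the [@metadata][csv_line] field name is an assumption) is to run the gsub on a copy, so the original message is kept unchanged:

    # copy the raw line, normalize only the copy, then parse the copy
    mutate { copy => { "message" => "[@metadata][csv_line]" } }
    mutate { gsub => [ "[@metadata][csv_line]", ', "', ',"' ] }
    csv { source => "[@metadata][csv_line]" }

Fields under [@metadata] are not sent to the output, so only the parsed columns plus the untouched message would end up in Elasticsearch.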

Ah, I see!
I'll try and see if it works. Thank you very much!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.