How to parse mixed JSON logs

Test the format before parsing...

    dissect { mapping => { "message" => "%{ts} %{+ts} | %{restOfLine}" } }
    if [restOfLine] =~ /{.*}/ {
        # Looks like JSON
        json { source => "restOfLine" }
    } else if [restOfLine] =~ /\[.*\]/ {
        # Looks like a bracketed list; strip the brackets, then parse as CSV
        mutate { gsub => [ "restOfLine", "^\[", "", "restOfLine", "\]$", "" ] }
        csv { source => "restOfLine" }
    } else {
        # Handle other formats
    }
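
For example, given made-up lines like the two below, the first would take the json branch and the second the csv branch:

    2019-02-03 23:52:12 | {"user": "alice", "status": 200}
    2019-02-03 23:52:13 | [V4,0833364F,533.330]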

Thank you so much for your kind help, it's working. But I am getting multiple fields named "column" in Kibana. Why is this? Could you please explain? I am attaching a screenshot of it here.

If they are not necessary, could we remove them?

I assumed you wanted to parse that as CSV. If you do not supply the column names, the csv filter will generate them.

OK, got it. Thank you, Badger.

Hi Badger,
Now I want to parse this as CSV and provide a column name for each of the comma-separated values:

    V4,0833364F,533.330,0,0,533.330,0,0,0,0,-0.849,0,0,-0.849,628.064,0,0,628.064,432.013,431.847,433.645,430.547,249.423,247.824,251.089,249.355,1.055,0,0,3.164,49.975,38982176.000,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,19/02/03,23:52:1234:

Thank you, please help.

OK, so use the columns option on the csv filter.
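
For example, something like this. The names here are just placeholders for whatever your fields actually are, and any values beyond the list keep the auto-generated names (column4, column5, and so on):

    csv {
        source => "message"
        columns => [ "record_type", "device_id", "reading" ]
    }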

Yes I can, but the problem is that the number of values is not fixed; it may vary. How do I create column names dynamically according to the number of values separated by commas?

Or can I map only a few of the values rather than every column? For example, I want to map the first two values and the last value, which is in a date format, and it should come up with a date data type in Kibana.

If you require a different number of columns for different events, then you could use a conditional to decide which csv filter to use. Something like this:

    if [message] =~ /.*,.*,.*,.*,.*/ {
        csv {} # events with at least 5 fields
    } else if [message] =~ /.*,.*,.*,.*/ {
        csv {} # events with exactly 4 fields
    }
    [...]
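
Alternatively, if you only want the first two values and the trailing date regardless of how many columns there are, one option is a grok pattern anchored at both ends plus a date filter. A rough sketch; the field names are placeholders, and the yy/MM/dd pattern is an assumption based on your sample:

    grok {
        # Capture the first two values, skip the variable middle, grab the date and time at the end
        match => { "message" => "^(?<record_type>[^,]+),(?<device_id>[^,]+),.*,(?<event_date>\d{2}/\d{2}/\d{2}),(?<event_time>[^,]*)$" }
    }
    date {
        match => [ "event_date", "yy/MM/dd" ]
        target => "event_date" # parse in place so it maps as a date in Kibana
    }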
