How to parse this JSON

I am new to Logstash and have been trying to parse the JSON below with no luck.

"stat": "user",
"Ref": "USER:[5000,John Smith,3A37332D2659554F9CCE0CD754185269]",
"statistics": [


The main thing I am trying to do is put each statistic into its own field; each entry under statistics represents a metric from the source system.

example output:

  statistic.Status => "IDLE"
  statistic.Duration => "1505"

Can anyone point me in the right direction to get started with this?

I think you need to convert the statistics array into a hash ("key" => "value"), and for that you need a second array containing the hash keys (column names).

First, you can use the json_lines (or json) codec on the input in order to parse the incoming event(s) as JSON; then, in the filter section, you can use the ruby filter to join the two arrays into a hash.
Something along these lines:

    keys = ["Status", "Duration", ...]
    event.set("joined", Hash[keys.zip(event.get("statistics"))])

It is probably better to inject the column names (the keys array) early on with add_field etc. At least it is much cleaner and easier to maintain. Then you can just read it in the ruby filter (event.get()).
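In plain Ruby (outside Logstash), the zip-into-hash idea looks like this; the key names and sample values here are made up for illustration:

```ruby
# Plain-Ruby sketch of the zip-into-hash idea.
# Key names and sample values are hypothetical.
keys   = ["Status", "Duration", "Calls"]
values = ["IDLE", "1505", 3]   # roughly what event.get("statistics") would return
joined = Hash[keys.zip(values)]
# joined => {"Status"=>"IDLE", "Duration"=>"1505", "Calls"=>3}
```

Inside a ruby filter you would read the array with event.get("statistics") and write the result back with event.set.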

PS. Apologies for any typos; I'm writing this on my phone.

Thanks for the reply, but I thought it would be a much easier task to process the data in the statistics field. My initial idea was to use the csv filter on the statistics field, but that doesn't seem to work. Same with gsub: if I try to do anything to the statistics field I get the error:

 gsub mutation is only applicable for strings and arrays of strings, skipping {:field=>"statistics"

I'm failing to understand why it's so difficult to work with this field. I'm new to Logstash, but I have been able to get it to do amazing things on logs that are much more complicated than this.
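For what it's worth, the gsub error is consistent with the debug output below: the array mixes strings and integers, and gsub only applies to strings or arrays of strings. A quick plain-Ruby check (sample values taken from the debug output):

```ruby
# The statistics array mixes strings and integers,
# so it is not an "array of strings" and gsub skips the field.
statistics = ["0", "", "", "0", 0, "IDLE", "5017", 1051]
all_strings = statistics.all? { |v| v.is_a?(String) }
# all_strings => false
```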

The rubydebug output for the statistics field looks like this; surely there is a simple way I can create a field for each value.

"statistics" => [
    [ 0] "0",
    [ 1] "",
    [ 2] "",
    [ 3] "0",
    [ 4] 0,
    [ 5] 0,
    [ 6] 0,
    [ 7] "",
    [ 8] 0,
    [ 9] "",
    [10] "0",
    [11] 0,
    [12] "IDLE",
    [13] "5017",
    [14] 1051
]

    mutate { join => { "statistics" => "," } }
    csv { source => "statistics" autogenerate_column_names => true }

Obviously you do not actually want autogenerate_column_names; you would use the columns option.
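Put together, the filter section would look roughly like this (the column names here are placeholders; substitute the real metric names from the source system):

```
filter {
  mutate { join => { "statistics" => "," } }
  csv {
    source  => "statistics"
    # placeholder column names -- use the actual metric names
    columns => ["Metric1", "Metric2", "Status", "Duration"]
  }
}
```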


That is exactly what I wanted; I knew it would be this simple. Thanks so much. I must have tried every possible way except this one — so simple in the end.

My initial reaction was that @admlko was right and this required a ruby filter. It was only when you said you wanted to use a csv filter that I realized it could be done that way.

Well, thanks to both of you anyway :)

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.