Csv: map each field name to an array index

Hi, I have a CSV file with 2000 fields, and I only need 5 of them, whose positions I know. Getting those 5 fields is easy with Filebeat (decode_csv_fields and extract_array), because all I need is to map each field name to an array index.
But how can I do the same in Logstash? It looks like in Logstash you need to specify all the fields before you can access any of them. That is overkill for me, since I only need 5 out of 2000.
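For context, the Filebeat approach described above might look roughly like this. The processor names are real Filebeat processors, but the field names and indexes here are placeholders, not taken from the original post:

    processors:
      - decode_csv_fields:
          fields:
            message: decoded
      - extract_array:
          field: decoded
          mappings:
            field_1: 0
            field_2: 1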

You may be able to achieve what you want with a little trick using the split option of the mutate filter.

Instead of parsing the line with the csv filter, use mutate split to turn your message field into an array, then add the fields you want based on their array indexes.

Something like this:

filter {
    mutate {
        split => { "message" => "," }
    }
    mutate {
        add_field => {
            "field_1" => "%{[message][0]}"
            "field_2" => "%{[message][1]}"
            "field_N" => "%{[message][N]}"
        }
    }
}

This may not work correctly if your CSV has values that contain commas.
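To see why a plain split breaks on such data, here is a small Python demonstration (the sample row is made up for illustration): a quoted value containing a comma gets cut in two by a naive split, while a real CSV parser handles the quoting correctly.

```python
import csv
import io

# A row where one value contains an embedded comma, properly quoted.
line = 'alice,"Smith, John",42'

# Naive split on "," breaks the quoted value into two array elements,
# which is what mutate split would do to this line.
naive = line.split(",")
print(naive)   # ['alice', '"Smith', ' John"', '42']

# A real CSV parser respects the quoting and keeps the value intact.
parsed = next(csv.reader(io.StringIO(line)))
print(parsed)  # ['alice', 'Smith, John', '42']
```

So the mutate split trick is safe only when you know your values never contain the delimiter; otherwise the csv filter is the safer choice.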

It's working!!! Thanks a lot!

You can also use the prune plugin to whitelist field names.
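That approach might look something like the sketch below: parse the whole line with the csv filter, then keep only the fields you care about. The whitelist_names option takes regex patterns; the field names here are placeholders:

    filter {
        csv { }
        prune {
            whitelist_names => ["^column1$", "^column2$", "^message$"]
        }
    }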
