Copy field and gsub it

I am trying to remove the translate filter to get this going. The original type is number, so I'm not sure if I need the convert section; I have used gsub with strings without a problem. Can this be done?
The new field is getting created (netflow.dinfo) but the gsub is not working for me

filter {
  if [host] == "192.168.199.1" {
    mutate {
      copy => { "[netflow][direction]" => "[netflow][dinfo]" }
      convert => { "[netflow][dinfo]" => "string" }
      gsub => [
        "netflow.dinfo", "0", "0 - Ingress",
        "netflow.dinfo", "1", "1 - Egress"
      ]
    } #endmutate
  } #endif
} #filter

There are two reasons why this isn't working:

  • Mutate operations aren't evaluated in the order they're listed in the config file. The order is fixed (see below), and operations that depend on each other, like in this case, must be separated into different mutate filters.
  • The syntax for nested fields is always [netflow][dinfo] and never netflow.dinfo.
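For example, the config above could be rewritten with one mutate filter per dependent operation, so each one finishes before the next starts (a sketch, reusing the field names from the original config):

filter {
  if [host] == "192.168.199.1" {
    mutate {
      copy => { "[netflow][direction]" => "[netflow][dinfo]" }
    }
    mutate {
      convert => { "[netflow][dinfo]" => "string" }
    }
    mutate {
      gsub => [
        "[netflow][dinfo]", "0", "0 - Ingress",
        "[netflow][dinfo]", "1", "1 - Egress"
      ]
    }
  }
}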

Thank you very much for the explanation, Magnus.

So it will never work because gsub has higher order than copy?

I've tried following the order you posted and used rename instead of copy since it's evaluated earlier. It appears to be working, but the type in Kibana is set to number instead of string. Do I need to delete the index and data before it will pick up the new type?

{
   "netflow" => {
        "output_snmp" => 2,
             "dst_as" => 0,
           "dst_mask" => 17,
            "in_pkts" => 14,
      "ipv4_dst_addr" => "192.168.192.105",
            "src_tos" => 0,
     "first_switched" => "2017-10-25T21:49:59.999Z",
         "flowset_id" => 257,
        "l4_src_port" => 443,
      "ipv4_next_hop" => "192.168.199.2",
           "src_mask" => 24,
            "version" => 9,
       "flow_seq_num" => 1628119,
      "ipv4_src_addr" => "192.168.101.12",
           "in_bytes" => 8257,
           "protocol" => 6,
      "last_switched" => "2017-10-25T21:49:59.999Z",
         "input_snmp" => 5,
          "tcp_flags" => 27,
    "flow_sampler_id" => 0,
        "l4_dst_port" => 59148,
             "src_as" => 0
},
"@timestamp" => 2017-10-25T21:50:00.000Z,
  "@version" => "1",
      "host" => "192.168.199.1",
      "type" => "netflow",
      "tags" => [
    [0] "netflow",
    [1] "NDC",
    [2] "Cisco 2901 Router"
],
 "direction" => "0 - Ingress"

}

[2017-10-26T09:02:15,564][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2017.10.25", :_type=>"netflow", :_routing=>nil}, 2017-10-25T22:02:31.000Z 192.168.199.1 %{message}], :response=>{"index"=>{"_index"=>"logstash-2017.10.25", "_type"=>"netflow", "_id"=>"AV9VjrB5cftdLdKJKfte", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [direction]", "caused_by"=>{"type"=>"number_format_exception", "reason"=>"For input string: "1 - Egress""}}}}}

EDIT:
After deleting the index it picked up the right type, so it does work.

I'm still trying to find out if I can actually keep the original field and create the second one

While reading up on the filters I ended up using this:
if [host] == "192.168.199.1" {
  mutate {
    add_tag => [ "NDC", "Cisco 2901 Router" ]
    add_field => { "[netflow][direction_description]" => "%{[netflow][direction]}" }
  }
}

filter {
  if [host] == "192.168.199.1" {
    mutate {
      gsub => [
        "[netflow][direction_description]", "0", "0 - Ingress",
        "[netflow][direction_description]", "1", "1 - Egress"
      ]
    }
  }
}

Seems to be much cleaner. The only thing I've noticed in Kibana is that when I try to visualize, I end up with a third field:
netflow.direction_description.keyword: Descending Sum of netflow.in_bytes
1 - Egress 317,396,025
0 - Ingress 290,489,442
%{[netflow][direction]} 0

So it will never work because gsub has higher order than copy?

Yes. You need to use separate mutate filters.

Seems to be much cleaner. The only thing I've noticed in Kibana is that when I try to visualize, I end up with a third field:

Yes, because text fields are analyzed by default, so a .keyword subfield containing the unanalyzed string is added automatically. In this case you should set the field to be unanalyzed (a keyword field rather than a text field) from the beginning. Read about field analysis in the ES documentation.
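One way to do that is to install an index template that maps the field as keyword before the next index is created. This is a sketch against the Elasticsearch 5.x template API; the template name and the logstash-* index pattern are assumptions, and the mapping type "netflow" is taken from the log output earlier in the thread:

PUT _template/netflow_direction
{
  "template": "logstash-*",
  "mappings": {
    "netflow": {
      "properties": {
        "netflow": {
          "properties": {
            "direction_description": { "type": "keyword" }
          }
        }
      }
    }
  }
}

As with the type change earlier in the thread, the new mapping only applies to newly created indexes, not existing ones.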


Can you please expand a bit? Even with separate mutate filters, won't the order still be gsub before copy, so I can't use copy?

Separate filters are run in the order they're listed in the configuration file, but within the same filter the fixed order applies.

Thanks Magnus will give that a shot tomorrow

EDIT: At first it didn't work, only to find out it was a syntax error

So in separate filters I've used this order
copy
convert (to string)
gsub

I've learned so much from you, many thanks Magnus. Your second post in particular was so immensely helpful in understanding the process

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.