Separate a field generated by the kv filter

Hello everyone, here is my situation. I'm parsing a Fortigate log using the kv filter as follows:

  kv {
      source => "message"
      target => "kv"
  }

(I should emphasize that the fields that arrive in the log are not always the same.)

When the index is generated and I look at it in the Kibana interface, I see the following field:

"kv": {
  "devname": "FGT-HA",
  "logver": "56",
  "vd": "vfw-wifi",
  "appcat": "Network.Service",
  "sentpkt": "1",
  "srcip": "x.x.x.39",
  //more fields...
}

I would like the fields not to be encapsulated inside the kv field, but to be independent top-level fields. If I leave "target" out of the filter, the index is not generated at all; I have already tried many variations of this. That is why I am approaching the problem on the assumption that I already have the generated kv field.
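In other words, what I am after is a document where those same keys sit at the top level, like this:

"devname": "FGT-HA",
"logver": "56",
"vd": "vfw-wifi",
"appcat": "Network.Service",
"sentpkt": "1",
"srcip": "x.x.x.39",
//more fields...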

What can I do to split this "kv" field into separate top-level fields?

I appreciate your help!

By default a kv filter will create fields at the top level. My guess is that one of those fields is causing a mapping error. Are there any errors in either the Logstash or Elasticsearch logs?
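That is, with no target set, the same filter writes each key/value pair directly onto the root of the event:

  kv {
      source => "message"
  }

If you would rather keep the target and flatten afterwards, a ruby filter along these lines should copy the nested keys to the top level (an untested sketch using the event API):

  ruby {
      code => '
          kv = event.get("kv")
          if kv.is_a?(Hash)
              kv.each { |k, v| event.set(k, v) }
              event.remove("kv")
          end
      '
  }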

Hello Badger! Thanks for your response.

Nothing shows up in the Elasticsearch log, but in the Logstash log I get the following:

[2019-06-19T22:10:00,164][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2019-06-19T22:10:00,832][INFO ][filewatch.observingtail  ] QUIT - closing all files and shutting down.
[2019-06-19T22:10:01,977][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x6f8fedb2 run>"}
[2019-06-19T22:10:32,040][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.3"}
[2019-06-19T22:10:35,404][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-06-19T22:10:35,960][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://x.x.x.20:9200/, http://x.x.x.21:9200/]}}
[2019-06-19T22:10:35,973][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://x.x.x.20:9200/, :path=>"/"}
[2019-06-19T22:10:36,191][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://x.x.x.20:9200/"}
[2019-06-19T22:10:36,276][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-06-19T22:10:36,283][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-06-19T22:10:36,285][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://x.x.x.21:9200/, :path=>"/"}
[2019-06-19T22:10:36,295][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://x.x.x.21:9200/"}
[2019-06-19T22:10:36,339][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//x.x.x.20:9200", "//x.x.x.21:9200"]}
[2019-06-19T22:10:36,511][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-06-19T22:10:36,547][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-06-19T22:10:37,568][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_33001b50f43dd792b5a2d34d2425c7fa", :path=>["/var/log/fortigate/2019-06-19-22.log"]}
[2019-06-19T22:10:37,601][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x78e71330 run>"}
[2019-06-19T22:10:37,665][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-06-19T22:10:37,715][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2019-06-19T22:10:38,148][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

In the end it looks like everything started up successfully. Is there an error being overlooked somewhere?
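In case it helps narrow things down, I was thinking of moving the nested keys out one at a time with mutate, to see whether a specific field triggers the problem (a hypothetical test with just one field):

  mutate {
      rename => { "[kv][devname]" => "devname" }
  }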
