KV Filter Question

Hi,

I can now successfully receive logs from my Synology. Now I have to add some fields. At the moment I don't have everything as a field in Kibana, so in Kibana the message field contains the important information.

The message looks like this: WinFileService Event: delete, Path: /TEST/asd, User: Hans.Peter, IP: 192.168.0.29

So my question is: what does the KV filter have to look like so that in the end I have all of this as fields to filter on in Kibana?

Thanks for your help.

What have you tried so far?

Useful links for that:

Have a look at the grok filter: https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

So I tried it with the grok filter, but it didn't work, so I wanted to try the KV filter instead.

200_synology.conf

filter {
  grok {
    match => { "message" => "%{GREEDYDATA:message}" }
  }
}

I still don't understand everything. The logs are coming from a Synology via TCP port 5514, so I have configured an input and an output. You can view the config on pastebin: https://pastebin.com/Dar5QajU
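In short, the input and output parts look roughly like this (simplified):

input {
  syslog {
    port => 5514
    type => "syslog"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}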

I hope you can now give me some hints/help.

Thanks.

Your grok filter doesn't do anything useful. You're parsing the message field, matching the whole string, and storing the matched result in the message field. I don't know what you're trying to accomplish.

You're right, and with this filter "enabled" it doesn't work at the moment. Could you give me an example of how to do it if I want to filter on "WinFileService Event"? I'm close to the solution, but somewhere in my head I'm still misunderstanding something.

kv {
  value_split => ": "
  field_split => ", "
}

You'll probably want to use a grok filter to extract the first word ("WinFileService") to a separate field so that "Event" becomes the first key/value field that's extracted (rather than "WinFileService Event").

I'm a little bit further now. The word "Event" should end up in the same field as "WinFileService".

[2017-04-24T22:39:34,856][ERROR][logstash.agent           ] fetched an invalid config {:config=>"input {\n#  file {\n#    path => \"/tmp/in.log\"\n#    type => \"logfile\"  # a type to identify those logs (will need this later)\n#    start_position => \"beginning\"\n#   }\n# udp {\n#     port => 5555\n#     type => \"udp\"\n#   }\n# tcp {\n#     port => 5555\n#     type => \"tcp\"\nsyslog {\n    port => 5514\n    type => \"syslog\"\n}\n}\n\noutput {\n#  file {\n#    path => \"/tmp/out.log\"\n#  }\n  elasticsearch { hosts => [\"localhost:9200\"]\n     hosts => \"localhost:9200\"\n  }\n  if [@metadata][cthostmeta] == \"ELKSTACK\" {\n    file {\n       codec => rubydebug { metadata => true}\n       path => \"/tmp/cthost_out.log\"\n    }\n  }\n}\n\nfilter {\n\n\tif [host] == \"192.168.0.20\" {\n\t\n\tgrok {\n\tmatch => { \"message\" => [\n\t\t\t\"WinFileService Event:WinFileService_WMA}%{GREEDYDATA:message}\",\n\t\t\t\"%{GREEDYDATA:message}\"\n\t\t\t\n\t      }\n\toverwrite => [ \"message\" ]\n\t}\n}\t\n\n", :reason=>"Expected one of #, {, ,, ] at line 44, column 8 (byte 787) after filter {\n\n\tif [host] == \"192.168.0.20\" {\n\t\n\tgrok {\n\tmatch => { \"message\" => [\n\t\t\t\"WinFileService Event:WinFileService_WMA}%{GREEDYDATA:message}\",\n\t\t\t\"%{GREEDYDATA:message}\"\n\t\t\t\n\t      "}

My filter now looks like this, but as you can see it doesn't work:

filter {

    if [host] == "192.168.0.20" {

    grok {
    match => { "message" => [
                    "WinFileService Event:WinFileService_WMA}%{GREEDYDATA:message}",
                    "%{GREEDYDATA:message}"

          }
    overwrite => [ "message" ]
    }

}

I think I'm getting closer to the solution.

You're never closing the [ in match => { "message" => [.

Thanks, I have tried different variants. Unfortunately I'm not having any luck.

filter {

    if [host] == "192.168.0.20" {

    grok {
    match => { "message" => [
                    "%{WinFileService Event:WinFileService_WMA}%{GREEDYDATA:message}",
                    "%{GREEDYDATA:message}"
             ]
          }
    overwrite => [ "message" ]
    } }

What about the space between "WinFileService" and "Event"? How do I have to handle that, given that in the log there is a ":" after "Event"?

Your grok expression doesn't make sense. Try this instead:

WinFileService %{GREEDYDATA:kv}

This'll ignore the initial "WinFileService" and store the rest in the kv field. Then feed the kv field to the kv filter.
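Putting the two pieces together, the filter would look roughly like this (an untested sketch, using the value_split/field_split settings suggested above):

filter {
  grok {
    # drop the leading "WinFileService" and keep the rest in the "kv" field
    match => { "message" => "WinFileService %{GREEDYDATA:kv}" }
  }
  kv {
    # parse the key/value pairs out of the extracted field
    source      => "kv"
    value_split => ": "
    field_split => ", "
  }
}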

Hmm I think I have to explain again what I want to do.

I want to create fields for the information that is in the message field.

The message looks like this: WinFileService Event: Delete, Path: /Privat/asd, File/Folder: Folder, Size: NA, User: Hispeed, IP: 192.168.0.29

So my idea was to do it via the KV filter; then I switched to the grok filter. In the end I want to have the following fields:

  • WinFileService Event
  • Path
  • File/Folder
  • Size
  • User
  • IP

In the end I can then filter on those fields in Kibana. Because I'm still learning, I started with the first field rather than all of them together.
Which solution is better for that: grok or kv?

You can use either, but kv is simpler. If you really want "WinFileService Event" to be a key you don't need a grok filter at all.
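With just the kv filter from before, your example line would end up as fields roughly like these:

"WinFileService Event" => "Delete"
"Path"                 => "/Privat/asd"
"File/Folder"          => "Folder"
"Size"                 => "NA"
"User"                 => "Hispeed"
"IP"                   => "192.168.0.29"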

Ok, so then I'll use the KV filter. At the moment I have this:

filter {


    kv  {
            add_field => {
            "WinFileService Event%{WinFileServiceEvent_WMA}"
    } }

The sample on the elastic website looks like this:

filter {
  kv {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}

I can't put the pieces together in my head.
Which part of my log message is "foo"? Is "somefield" = "WinFileServiceEvent_WMA"?
And what about "Hello world, from %{host}"?

Sorry for these really basic questions...

No! Start with my previous suggestion.

kv {
  value_split => ": "
  field_split => ", "
}

Ahh, now I get it.

I have now:

filter {
        kv  {
                value_split => ":"
                field_split => ", "
        }
}

This is not bad, but of course not perfect.
IP = OK
User = only the first name, so for "User: firstname lastname" the last name is cut off.
Path = starts out correct, but has the same problem as the User field as soon as there is a space.
How can I fix that problem with the spaces?

I made a picture so you can see the output and the problem.

Aha. Remove the trailing space from field_split so that there's just a comma; multi-character field splits aren't supported. You may also have to set trim_key to a space to avoid getting a leading space in the field names.

I tried to do that, but it doesn't work. I get an error.

filter {
        kv  {
                value_split => ":"
                field_split => ","
                trim_key => " "
        }
}

Error:

[2017-04-26T15:06:09,097][ERROR][logstash.agent ] fetched an invalid config {:config=>"input {\n# file {\n# path => "/tmp/in.log"\n# type => "logfile" # a type to identify those logs (will need this later)\n# start_position => "beginning"\n# }\n# udp {\n# port => 5555\n# type => "udp"\n# }\n# tcp {\n# port => 5555\n# type => "tcp"\nsyslog {\n port => 5514\n type => "syslog"\n}\n}\n\noutput {\n# file {\n# path => "/tmp/out.log"\n# }\n elasticsearch { hosts => ["localhost:9200"]\n hosts => "localhost:9200"\n }\n if [@metadata][cthostmeta] == "ELKSTACK" {\n file {\n codec => rubydebug { metadata => true}\n path => "/tmp/cthost_out.log"\n }\n }\n}\n\nfilter {\n\tkv {\n\t \tvalue_split => ":"\n\t\tfield_split => ","\n\t\ttrim_key => " "\n\t}\n}\t\n\n", :reason=>"Something is wrong with your configuration."}

What am I doing wrong with trim_key?

It appears the trim_key option was renamed rather recently. Consult your version of Logstash for details, but you should probably set trim instead.
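For example, something like this (assuming an older kv filter version where the option is still called trim):

filter {
  kv {
    value_split => ":"
    field_split => ","
    trim        => " "    # on older versions the option is trim rather than trim_key
  }
}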

I have set trim, and with it the filter works nearly perfectly. Now I just need a small cosmetic correction, because there is a problem when there is a space in the message, for example in the Path:

Original:
Path: /Diverses/Diverses unsortiert/Schieb-Report.50.Hammertips.-.Windows.7.pdf

Field:
Path: /Diverses/Diversesunsortiert/Schieb-Report.50.Hammertips.-.Windows.7.pdf

You can see that the space between "Diverses" and "unsortiert" is cut off. Is there a simple way to fix that?
Thanks for the support so far. I'm already very happy :).