Unable to extract the message field from the Logstash filter

I am trying to split my message into different fields. I used the grok filter in Logstash, but I am still getting the same message content in the Kibana Discover dashboard. My message is as follows:

2019-8-16T18:37:16.45 CT3 45.282

I want to extract CT3 and 45.282; I don't want the timestamp.

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "%{USERNAME:user} %{NUMBER:duration}" }
    remove_field => [ "message" ]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

I already tested with the Grok Debugger; I got the following result:

%{USERNAME:user} %{NUMBER:duration} 

{
  "user": [
    [
      "CT3"
    ]
  ],
  "duration": [
    [
      "45.282"
    ]
  ],
  "BASE10NUM": [
    [
      "45.282"
    ]
  ]
}

But in Kibana I am getting the same message, as follows:

2019-8-16T18:37:16.45 CT3 45.282

If your message is always in a fixed format, it is better to use dissect for extracting values. It's much simpler.

Also, for debugging purposes, please send Logstash output to stdout rather than Elasticsearch:

output {
  stdout { codec => rubydebug }
}
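For a fixed-format line like the sample above (`2019-8-16T18:37:16.45 CT3 45.282`), a dissect mapping could be sketched as follows. This is only a sketch, not the poster's config: the field names `user` and `duration` are carried over from the grok attempt, the leading `%{}` discards the timestamp, and `convert_datatype` (an option of the dissect filter) makes `duration` numeric for plotting:

```
filter {
  dissect {
    # %{} consumes the timestamp without keeping it;
    # a single space between tokens must match the message exactly
    mapping => {
      "message" => "%{} %{user} %{duration}"
    }
    # store duration as a float so Kibana can aggregate on it
    convert_datatype => {
      "duration" => "float"
    }
  }
}
```

Note that spacing in the dissect mapping must match the spacing in the message literally.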

Could you please explain a little bit about using dissect for extracting values? I want to extract the values and plot a graph in Kibana.

Hi Arun,
I was suggesting output to stdout for convenience. Your final config after the fix should output to Elasticsearch, and then you can plot the graph in Kibana.

input {
  beats {
    port => 5044
  }
}

filter {
  if "django" not in [log][file][path] {
    dissect {
      mapping => {
        "[log][file][path]" => "/%{}/%{}/%{}/%{}/%{}/%{task_log_folder}/%{}"
      }
    }
    mutate {
      split => { "task_log_folder" => "_"  }
      add_field => { "jobID" => "%{[task_log_folder][0]}" }
      add_field => { "taskID" => "%{[task_log_folder][1]}" }
      add_field => { "taskVersion" => "%{[task_log_folder][2]}" }
    }
    mutate {
      convert => { "jobID" => "integer" }
      convert => { "taskID" => "integer" }
      convert => { "taskVersion" => "integer" }
      remove_field => ["task_log_folder"]
    }
  }
}

output {
	stdout { 
	     codec => rubydebug { } 
	}
}

This is an example that I created where I was extracting values from the folder structure in a file path.
The mapping in my case can also be done as
mapping => {
  "[log][file][path]" => "/%{}/%{}/%{}/%{}/%{}/%{jobId}_%{taskId}_%{taskversion}/%{}"
}
I was experimenting with split.

In your case, you will be mapping the "message" field. %{} is used when you are not interested in a field. Try out an example with simple messages.
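As a minimal sketch of such an example (the sample line and the `user`/`duration` field names are assumptions taken from earlier in this thread), a generator input lets you test the dissect mapping without Beats:

```
input {
  # replay the sample message once instead of reading from Beats
  generator { count => 1 lines => [ '2019-8-16T18:37:16.45 CT3 45.282' ] }
}

filter {
  dissect {
    # %{} drops the timestamp; remaining tokens become event fields
    mapping => { "message" => "%{} %{user} %{duration}" }
  }
}

output {
  stdout { codec => rubydebug }
}
```

Running Logstash with this config prints the parsed event to stdout, so you can check the extracted fields before switching the output back to Elasticsearch.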

That grok pattern works for me:

  "duration" => "45.282",
      "user" => "CT3"

Are you sure there is a single space between the two fields? Spacing matters in both grok and dissect.

Yes, a single space between the two fields. How did you get the values for duration and user?

  "duration" => "45.282",
      "user" => "CT3"

input { generator { count => 1 lines => [ '2019-8-16T18:37:16.45 CT3 45.282' ] } }
filter {
    grok { match => { "message" => "%{USERNAME:user} %{NUMBER:duration}" } remove_field => [ "message" ] }
}
