Looking more closely, I see that your log isn't quite in syslog format, so using a custom grok filter makes sense. But you're trying to parse an event_timestamp field that doesn't exist; in fact, you're not extracting any fields from the timestamp at all. A custom grok pattern should give you an event_timestamp field that you can then parse with the date filter.
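For illustration, a sketch along these lines pulls the leading timestamp into an event_timestamp field (the field names are just placeholders, not an exact pattern):

filter {
  grok {
    # Capture the leading syslog-style date/time (e.g. "Mar 29 11:56:15.410045")
    # into event_timestamp and keep the remainder of the line for further parsing.
    match => { "message" => "%{SYSLOGTIMESTAMP:event_timestamp} %{GREEDYDATA:rest}" }
  }
}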
I think I've got it partially working: the "timestamp" field is extracted, and everything in @timestamp matches except the hour. Please let me know if I am doing something wrong.
Below is my Logstash config.
filter {
  grok {
    # Note: "in" and "was" are plain literals in the log line; escaping them as
    # \in and \was is wrong, since \w is a regex character class. The second
    # braced number is captured as MessageId (the name is arbitrary) so it does
    # not collide with the ProcessId field.
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{WORD:ProcessName}\(%{INT:ProcessId}\) %{WORD:Status}: \{%{INT:MessageId}\} %{WORD:SubProcessName} %{WORD:LatencyDetails} in the last period for host %{GREEDYDATA:DgwHost} was %{INT:Latency}" }
  }
  mutate {
    convert => { "Latency" => "integer" }
  }
  date {
    match => [ "timestamp", "MMM dd HH:mm:ss.SSSSSS" ]
  }
}
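For reference, a quick way to test the filter in isolation (assuming a local Logstash install) is a small stdin/stdout pipeline; a trimmed-down sketch:

input { stdin { } }
filter {
  # Minimal version of the filter above: just extract the timestamp
  # and let the date filter set @timestamp from it.
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{GREEDYDATA:rest}" }
  }
  date {
    match => [ "timestamp", "MMM dd HH:mm:ss.SSSSSS" ]
  }
}
output { stdout { codec => rubydebug } }

Running it with bin/logstash -f test.conf and pasting a raw log line prints every extracted field, which makes it easy to see what the date filter does to @timestamp.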
And my output is as follows:
"message" : "Mar 29 11:56:15.410045 diameterBeClient(16044) NOTICE: {970020} DCD DIAMETER_MIN_LATENCY in the last period for host dgw1.example.com was 5845"
"@timestamp" : "2017-03-29T15:56:15.410Z"
"timestamp" : "Mar 29 11:56:15.410045"