Hi,
I know that the @timestamp field cannot be deleted, so I want to override @timestamp with the timestamp from my logs. I've created a new field for that. The timestamp in the logfile looks like "13.07.2017 07:11:10 ...". My new field LOGTIMESTAMP contains "13.07.2017 07:11:10", but @timestamp still shows "17.07.2017 11:29:24".
I parse the logfile like this:
grok {
  match => [ "message", "%{DATESTAMP:LOGTIMESTAMP}" ]
}
date {
  match => ["LOGTIMESTAMP", "dd.MMM.YYYY hh:mm:ss"]
  target => "@timestamp"
  locale => "de"
  remove_field => [ "LOGTIMESTAMP" ]
}
What am I doing wrong? Thanks for your help.
If the date filter fails, the Logstash log contains clues. I can immediately see that MMM in your date pattern should be MM and, unless you use a 12-hour clock with no am/pm marker, hh should be HH.
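With those two changes, the relevant part of your configuration would look something like this (untested sketch, keeping your field names and the options from your post):

grok {
  match => [ "message", "%{DATESTAMP:LOGTIMESTAMP}" ]
}
date {
  # MM is the two-digit month, HH the 24-hour clock
  match => [ "LOGTIMESTAMP", "dd.MM.YYYY HH:mm:ss" ]
  target => "@timestamp"
  locale => "de"
  remove_field => [ "LOGTIMESTAMP" ]
}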
Thanks for your answer. I've made the adjustments as suggested, but without success. I can't find any error messages in the logfiles. Kibana shows me a _dateparsefailure tag, but no further error information ...
Works fine for me with both Logstash 2.3 and 5.4. Please copy/paste an example event from Kibana. Use the JSON tab in the Discover view so we get the raw JSON representation.
$ cat test.config
input { stdin { } }
output { stdout { codec => rubydebug } }
filter {
  date {
    match => ["message", "dd.MM.YYYY HH:mm:ss"]
    locale => "de"
  }
}
$ echo '13.07.2017 07:11:10' | logstash -f test.config
Settings: Default pipeline workers: 8
Pipeline main started
{
       "message" => "13.07.2017 07:11:10",
      "@version" => "1",
    "@timestamp" => "2017-07-13T05:11:10.000Z",
          "host" => "bertie"
}
Pipeline main has been shutdown
stopping pipeline {:id=>"main"}
If I reproduce your example, I get the same (correct) result, but only when I run it manually. Here is my JSON output:
{
  "_index": "filebeat-2017.07.18",
  "_type": "test",
  "_id": "AV1UtBnpzzBeDCd4SQD4",
  "_score": null,
  "_source": {
    "HOST": "test.lan",
    "PID": "20896",
    "message": "13.07.2017 00:51:10 [20896] [INFO] [main::() line(63)] staging dirs:",
    "tags": [
      "_dateparsefailure"
    ],
    "@timestamp": "2017-07-18T07:57:34.263Z",
    "LOGMESSAGE": "staging dirs:",
    "PRODUCT": "test",
    "beat": {},
    "SOURCE": "main::() line(63)",
    "INPUT_TYPE": "log",
    "FILE": "/tmp/test.log",
    "LOGTIMESTAMP": "13.07.2017 00:51:10",
    "TYPE": "test"
  },
  "fields": {
    "@timestamp": [
      1500364654263
    ]
  },
  "sort": [
    1500364654263
  ]
}
I can't reproduce.
$ cat data
13.07.2017 00:51:10 [20896] [INFO] [main::() line(63)] staging dirs:
$ cat test.config
input { stdin { } }
output { stdout { codec => rubydebug } }
filter {
  grok {
    match => ["message", "%{DATESTAMP:LOGTIMESTAMP}"]
  }
  date {
    match => ["LOGTIMESTAMP", "dd.MM.YYYY HH:mm:ss"]
    locale => "de"
  }
}
$ logstash -f test.config < data
Sending Logstash's logs to /home/magnus/logstash/logstash-5.4.1/logs which is now configured via log4j2.properties
[2017-07-18T15:56:07,689][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2017-07-18T15:56:07,715][INFO ][logstash.pipeline ] Pipeline main started
[2017-07-18T15:56:07,765][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
{
      "@timestamp" => 2017-07-12T22:51:10.000Z,
        "@version" => "1",
            "host" => "bertie",
         "message" => "13.07.2017 00:51:10 [20896] [INFO] [main::() line(63)] staging dirs:",
    "LOGTIMESTAMP" => "13.07.2017 00:51:10"
}
[2017-07-18T15:56:10,733][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
I find it very hard to believe that the date filter is silent about why it's failing.
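If Kibana really shows nothing beyond the _dateparsefailure tag, it's worth looking at the Logstash log file directly, or rerunning the pipeline with a higher log level. A rough sketch, assuming a 5.x install (the actual log directory is the one printed at startup and configured via log4j2.properties; the paths below are placeholders):

# Watch the Logstash log while a matching line is processed
tail -f /path/to/logstash/logs/logstash-plain.log

# Or rerun the test pipeline with more verbose logging
bin/logstash -f test.config --log.level=debug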
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.