@timestamp shows the indexed date/time, not the time from the log entry

Being very new to ELK, I have borrowed the following Logstash config, which seems to work in the main. Annoyingly, though, the @timestamp values are not the same as the values in the WebSphere logs, but instead show the date/time at which the index was rebuilt. How do I get @timestamp to show the actual date/time from the log entry, as the timestamp field does?

Incidentally, timestamp has %{tz_num} appended, which I would like to remove.

Here is the config:

input {
  file {
    path => [ "/opt/logs/SystemOut*.log" ]
    start_position => "beginning"
    type => "websphere"
    # important! by default logstash only reads files touched within the last 24 hours
    # 8640000 seconds = 100 days
    sincedb_path => "/dev/null"
    ignore_older => "8640000"
  }
}
filter {
  if [type] =~ "websphere" {
    grok {
      # note: the file input sets a [path] field, not [source] (see the example
      # event below), so this match fails and produces the _grokparsefailure tag
      match => ["source", "%{GREEDYDATA}/%{GREEDYDATA:server_name}/SystemOut.log"]
    }
    grok {
      # the literal brackets around the timestamp must be escaped
      match => ["message", "\[%{DATA:wastimestamp} %{WORD:tz}\] %{BASE16NUM:was_threadID} (?<was_shortname>\b[A-Za-z0-9$]{2,}\b) %{SPACE}%{WORD:was_loglevel}%{SPACE} %{GREEDYDATA:was_msg}"]
    }
    grok {
      match => ["was_msg", "(?<was_errcode>[A-Z0-9]{9,10})[:,\s]%{GREEDYDATA:was_msg}"]
      overwrite => [ "was_msg" ]
      tag_on_failure => []
    }
    translate {
      field => "tz"
      destination => "tz_num"
      dictionary => [
        "CET", "+0100",
        "CEST", "+0200",
        "EDT", "-0400"
      ]
    }
    translate {
      field => "was_errcode"
      destination => "was_application"
      regex => "true"
      exact => "true"
      # note: "EJPVJ" and "CLFWY" each appear twice below; only one mapping per key can take effect
      dictionary => [
        "CLFRW", "Search",
        "CLFRA", "Activities",
        "CLFRS", "Blogs",
        "CLFRL", "Bookmarks",
        "CLFRK", "Common",
        "CLFRM", "Communities",
        "EJPVJ", "Files",
        "CLFRV", "Forums",
        "CLFRQ", "Homepage",
        "CLFRP", "Installer",
        "CLFRO", "Configuration",
        "CLFRR", "Notifications",
        "CLFNF", "Portlet",
        "CLFRT", "FedSearch",
        "CLFWX", "News",
        "CLFWY", "Event",
        "CLFWZ", "Widget",
        "CLFRN", "Profiles",
        "CLFWY", "User",
        "EJPIC", "Portal",
        "EJPVJ", "Wikis",
        "ADMS", "Websphere",
        "SECJ", "Security"
      ]
    }
    mutate {
      replace => ['timestamp', '%{wastimestamp} %{tz_num}']
    }
    date {
      match => ["timestamp", "MM/dd/YY HH:mm:ss:SSS Z", "M/d/YY HH:mm:ss:SSS Z"]
      tag_on_failure => []
    }
    mutate {
      remove_field => [ 'tz', 'tz_num', 'wastimestamp' ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
  stdout { codec => rubydebug }
}

I have tried changing it to the following, but that produces the same result.

date {
  match => ["timestamp", "MM/dd/YY HH:mm:ss:SSS Z", "M/d/YY HH:mm:ss:SSS Z"]
  target => "@timestamp"
  tag_on_failure => []
}
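From the docs, target => "@timestamp" is already the date filter's default, so adding it shouldn't change anything on its own. One way to see whether the date parse is actually failing, rather than silently leaving the index time in place, is to name the failure tag; just a sketch, with a made-up tag name:

date {
  match => ["timestamp", "MM/dd/YY HH:mm:ss:SSS Z", "M/d/YY HH:mm:ss:SSS Z"]
  # hypothetical tag name; if it shows up in [tags], the parse failed
  tag_on_failure => ["_was_dateparsefailure"]
}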

Here is an example event from Logstash:

{
    "was_loglevel" => "I",
    "was_msg" => " The trace state has changed. The new trace state is *=info.",
    "message" => "[2/7/17 7:02:28:564 GMT] 000001b1 ManagerAdmin I TRAS0018I: The trace state has changed. The new trace state is *=info.",
    "type" => "websphere",
    "was_shortname" => "ManagerAdmin",
    "tags" => [
        [0] "_grokparsefailure"
    ],
    "was_threadID" => "000001b1",
    "path" => "/opt/logs/SystemOut.log",
    "@timestamp" => 2017-02-09T07:59:23.656Z,
    "@version" => "1",
    "host" => "f2730df8227c",
    "was_errcode" => "TRAS0018I",
    "timestamp" => "2/7/17 7:02:28:564 %{tz_num}"
}

I'm pretty new to this, so please would you mind explaining? I'm trying to learn the format of this file and what can be manipulated to achieve which result.

Thanks in advance

GMT is not included in your timezone translation table, hence you're getting "%{tz_num}" in your timestamp field. Fixing that might be the only thing needed. From a very quick look the rest looks okay.
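If keeping the zone table complete is a worry, the translate filter also has a fallback option that sets the destination field when nothing in the dictionary matches, so an unknown zone can't leave the literal %{tz_num} behind. A minimal sketch (the +0000 fallback is just an example default):

translate {
  field => "tz"
  destination => "tz_num"
  # used when "tz" matches nothing in the dictionary; example default only
  fallback => "+0000"
  dictionary => [
    "CET", "+0100",
    "CEST", "+0200",
    "EDT", "-0400"
  ]
}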

Great minds! I added that after posting my message:

translate {
  field => "tz"
  destination => "tz_num"
  dictionary => [
    "CET", "+0100",
    "CEST", "+0200",
    "EDT", "-0400",
    "GMT", "+0000"
  ]
}

In the output from Logstash I see that the %{tz_num} placeholder has been replaced with +0000, which looks better. Also, the @timestamp value matches!

{
    "was_loglevel" => "I",
    "was_msg" => " The trace state has changed. The new trace state is *=info.",
    "message" => "[2/7/17 7:02:28:564 GMT] 000001b1 ManagerAdmin I TRAS0018I: The trace state has changed. The new trace state is *=info.",
    "type" => "websphere",
    "was_shortname" => "ManagerAdmin",
    "tags" => [
        [0] "_grokparsefailure"
    ],
    "was_threadID" => "000001b1",
    "path" => "/opt/logs/SystemOut.log",
    "@timestamp" => 2017-02-07T07:02:28.564Z,
    "@version" => "1",
    "host" => "f2730df8227c",
    "was_errcode" => "TRAS0018I",
    "timestamp" => "2/7/17 7:02:28:564 +0000"
}

I deleted all the .sincedb_ files, then stopped Logstash (it runs in a Docker container) and started the container again, and that is when I saw the dates and times match.

In Kibana, however, the values do not tally. I wonder if it is an index thing, though I have specified

sincedb_path => "/dev/null"
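As I understand it, sincedb_path => "/dev/null" just discards the read positions, so every restart re-reads the files from the beginning; it doesn't touch what is already indexed, which would explain why Kibana still shows the old documents. If I wanted the positions kept across restarts, it would presumably look something like this (the path is only an example):

file {
  path => [ "/opt/logs/SystemOut*.log" ]
  start_position => "beginning"
  # persist read positions across restarts instead of discarding them
  sincedb_path => "/var/lib/logstash/sincedb_systemout"
}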

Many thanks for the quick reply

OK, I may have found something to delete the Kibana index too: http://stackoverflow.com/questions/33820068/why-is-that-after-deleting-an-index-in-logstash-kibana-still-displays-it

Now that I've blown away the index and all the data and restarted Kibana and Logstash, I can create a new index, and the dates and times look correct.

curl -XDELETE http://elasticsearch:9200/.kibana
curl -XDELETE http://elasticsearch:9200/*
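In hindsight, the * delete is rather heavy-handed, since it removes every index. Assuming the default logstash-* index naming, a more targeted version would have been:

curl -XDELETE http://elasticsearch:9200/logstash-*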

Thanks for your help
