Help with simple grok filter for getting date

I have tried to get this working for a couple of days now. It seems so simple that I must be missing something. I have read all the related topics and tried to find examples to help me get it working, with no luck. I just want the timestamp for events in an index to be the timestamp in the file, not the time Logstash ingests it. Here is my config:

input {
  file {
    path => "/app/buckets/bladelogic/**/*.log"
    stat_interval => 30
    start_position => "beginning"
    ignore_older => 0
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOG5424SD}:time_stamp" }
  }
  date {
    match => [ "time_stamp", "ISO8601" ]
    remove_field => [ "time_stamp" ]
  }
}

output {
  elasticsearch {
    hosts => ["1.2.3.4.5:9200","1.2.3.4.6:9200"]
    index => "bladelogic"
    user => "pete"
    password => "xxxxx"
  }
}

Some sample events:

[21 Feb 2017 15:07:20,673] [Scheduled-System-Tasks-Thread-3] [INFO] [System:System:] [Memory Monitor] Total JVM (B): 1341128704,Free JVM (B): 882924056,Used JVM (B): 458204648,VSize (B): 7945830400,RSS (B): 1835081728,Used File Descriptors: 311,Used Work Item Threads: 1/50,Used Client Connections: 1/200,DB Client-Connection-Pool: 0/0/0/100/75/25,DB Job-Connection-Pool: 0/0/0/100/75/25,DB General-Connection-Pool: 2/2/0/100/75/25
[21 Feb 2017 15:08:20,674] [Scheduled-System-Tasks-Thread-19] [INFO] [System:System:] [Memory Monitor] Total JVM (B): 1341128704,Free JVM (B): 870601656,Used JVM (B): 470527048,VSize (B): 7945830400,RSS (B): 1835081728,Used File Descriptors: 303,Used Work Item Threads: 1/50,Used Client Connections: 1/200,DB Client-Connection-Pool: 0/0/0/100/75/25,DB Job-Connection-Pool: 0/0/0/100/75/25,DB General-Connection-Pool: 2/2/0/100/75/25
[21 Feb 2017 15:09:20,673] [Scheduled-System-Tasks-Thread-9] [INFO] [System:System:] [Memory Monitor] Total JVM (B): 1341128704,Free JVM (B): 864725088,Used JVM (B): 476403616,VSize (B): 7945830400,RSS (B): 1835081728,Used File Descriptors: 312,Used Work Item Threads: 1/50,Used Client Connections: 1/200,DB Client-Connection-Pool: 0/0/0/100/75/25,DB Job-Connection-Pool: 0/0/0/100/75/25,DB General-Connection-Pool: 1/1/0/100/75/25
[21 Feb 2017 15:10:20,675] [Scheduled-System-Tasks-Thread-13] [INFO] [System:System:] [Memory Monitor] Total JVM (B): 1341128704,Free JVM (B): 858982272,Used JVM (B): 482146432,VSize (B): 7945830400,RSS (B): 1835081728,Used File Descriptors: 303,Used Work Item Threads: 1/50,Used Client Connections: 1/200,DB Client-Connection-Pool: 0/0/0/100/75/25,DB Job-Connection-Pool: 0/0/0/100/75/25,DB General-Connection-Pool: 1/1/0/100/75/25

I get a grok parse failure when browsing events in Kibana, and there is no time_stamp field when I go to create the index pattern or after the data has been indexed. Any ideas what I am missing?

Thanks,
Pete

I am using version 5.2.0 for Logstash and Elasticsearch (full ELK stack).

Hello,

The first thing I see is that the match is wrong; it should be:

match => {"message" => "%{SYSLOG5424SD:time_stamp}"}

Second, I couldn't find a pattern on GitHub for "ISO8601", so you might want to replace that with something else. :)
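
For what it's worth, SYSLOG5424SD is defined in the core grok patterns as \[%{DATA}\]+, so the corrected match captures the brackets along with the timestamp. Based on the first sample event above (illustrative, not actual output), the event would get a string field like:

time_stamp => "[21 Feb 2017 15:07:20,673]"

The date filter will then need a pattern that accounts for the brackets, or the grok match will need to capture only the text inside them.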

Thank you. I was under the assumption that the match was correct because I was testing every change I made to the conf file and was getting "Configuration OK".

CDLSTAP02:/home/virtual # /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/bladelogic.conf --config.test_and_exit
Sending Logstash's logs to /app/logstash which is now configured via log4j2.properties
Configuration OK

The field is now being parsed correctly and I see time_stamp as a field in the events, but it is being treated as a string, so I am not able to set it as the time-field name when creating the index pattern. Baby steps; thanks again.

Thank you. I was under the assumption that the match was correct because I was testing every change I made to the conf file and was getting "Configuration OK".

Your configuration was syntactically correct but semantically wrong, and it's impossible for Logstash to detect that.

The field is now being parsed correctly and I see time_stamp as a field in the events, but it is being treated as a string, so I am not able to set it as the time-field name when creating the index pattern.

You're using ISO8601 as the date pattern but the timestamp in your logs isn't ISO8601. You're probably looking for something like "dd MMM yyyy HH:mm:ss,SSS".
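
Putting the two fixes together, here is a minimal, untested sketch of the filter. Note the grok pattern here is a hand-rolled one that captures only the text inside the leading brackets (rather than SYSLOG5424SD, which captures brackets and all), so the date pattern doesn't need bracket literals:

filter {
  grok {
    # Capture just the text between the leading square brackets,
    # e.g. "21 Feb 2017 15:07:20,673" from the sample events.
    match => { "message" => "^\[%{DATA:time_stamp}\]" }
  }
  date {
    # Joda-Time pattern for "21 Feb 2017 15:07:20,673".
    # If the platform locale isn't English, add locale => "en"
    # so the "Feb" month abbreviation parses.
    match => [ "time_stamp", "dd MMM yyyy HH:mm:ss,SSS" ]
    remove_field => [ "time_stamp" ]
  }
}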

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.