I have some Cassandra logs that are parsed with the following custom grok patterns:
MILLISECOND (\d{3})
JAVALOGBACKTIMESTAMP %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:%{MINUTE}:%{SECOND},%{MILLISECOND}
CASS_BASE ^%{LOGLEVEL:level} \[(%{DATA:process}:%{INT:threadId}|%{DATA:process})\] %{JAVALOGBACKTIMESTAMP:timestamp} %{WORD:java_file}.java:%{INT:line_number} -
FLUSHSIZE %{BASE10NUM}(KiB|GiB|MiB)
CASS_DEFAULT %{CASS_BASE} %{GREEDYDATA:message}
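For context, a raw log line that these patterns match looks roughly like this (reconstructed from the parsed fields in the event below, so the exact spacing may differ):

WARNING [epollEventLoopGroup-2-1] 2019-05-30 14:38:17,591 SSLFactory.java:221 - Filtering out [TLS_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA] as it isn't supported by the socket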
and the event shows up in Kibana as the following:
{
  "_index": "services-2019.05.30",
  "_type": "doc",
  "_id": "9PUtCWsBuizMRyGPkPJw",
  "_version": 1,
  "_score": null,
  "_source": {
    "source": "/opt/cassandra/logs/system.log",
    "java_file": "SSLFactory",
    "@version": "1",
    "index_prefix": "services",
    "logsource": "cassandra",
    "@timestamp": "2019-05-30T14:38:24.124Z",
    "timestamp": "2019-05-30 14:38:17,591",
    "beat": {
      "version": "6.2.1",
      "name": "myname",
      "hostname": "myhostname"
    },
    "tags": [
      "grokked",
      "leveled"
    ],
    "message": "Filtering out [TLS_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA] as it isn't supported by the socket",
    "line_number": "221",
    "level": "WARNING",
    "profiles": "myprofile",
    "offset": 6779433,
    "logtype": "service",
    "process": "epollEventLoopGroup-2-1"
  },
  "fields": {
    "@timestamp": [
      "2019-05-30T14:38:24.124Z"
    ]
  },
  "highlight": {
    "logsource": [
      "@kibana-highlighted-field@cassandra@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1559227104124
  ]
}
and my Logstash config is the following:
if "cassandra" in [logsource] {
grok {
patterns_dir => ["/etc/logstash/patterns"]
match => ["message", "%{CASS_DEFAULT}"]
overwrite => [ "message" ]
add_tag => ["grokked"]
}
}
Since I'm capturing the log's timestamp into a field named timestamp, shouldn't Logstash automatically use that as @timestamp? Why isn't it doing that? Do I need a date filter for every log type to set @timestamp from the log's own timestamp, like the sketch below?
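If a date filter really is required, I assume it would look something like this inside the cassandra conditional (the format string is my guess at what matches the captured timestamp field, e.g. "2019-05-30 14:38:17,591"):

date {
  # parse the grokked timestamp field and use it as the event time
  match => ["timestamp", "yyyy-MM-dd HH:mm:ss,SSS"]
  # target defaults to @timestamp; shown explicitly here
  target => "@timestamp"
}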
Thanks!