Timestamp is ingest timestamp, not event timestamp

I am using Filebeat to send single-line ELFF-formatted logs from a Bluecoat proxy to Logstash. I believe I am grokking out the timestamp field correctly from the message, but the timestamp in Elasticsearch for all these events is the ingest timestamp, not the timestamp from the event. Everything else seems to be working fine. I do not have any grok parse errors.

I'm new to this; I've worked through everything else, but this one has me stumped and Google isn't helping. Any suggestions?

Sample sanitized data and logstash config below.

Sample Data
2020-08-20 20:10:27 29 11.26.90.201 200 TCP_ACCELERATED 39 293 CONNECT tcp some.website.org 443 / - - - 11.12.13.14 - - "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0" DENIED "Technology;Web Ads/Analytics" - 12.13.14.15 8080 10.20.30.40 1.2.3.4 - none - none - "none" "none"
2020-08-20 20:10:27 66 11.47.42.201 200 TCP_ACCELERATED 39 224 CONNECT tcp another.example.com 443 / - - - 13.14.15.16 - - "Apache-HttpClient/UNAVAILABLE (java 1.4)" OBSERVED "Business/Economy" - 12.13.14.15 8080 2.3.4.5 7.8.9.10 - none - none - "none" "none"

Logstash

filter {
    grok {
        match => { "message" => '%{TIMESTAMP_ISO8601:@timestamp} (%{NUMBER:bluecoat.time_taken}|\-) (%{IP:source.ip}|\-) (%{NUMBER:http.response.status_code}|\-) (%{WORD:bluecoat.action}|\-) (%{NUMBER:bluecoat.sc_bytes}|\-) (%{NUMBER:source.bytes}|\-) (%{WORD:http.request.method}|\-) (%{WORD:url.scheme}|\-) (%{URIHOST:destination.address}|\-) (%{NUMBER:destination.port}|\-) (%{URIPATH:url.path}|\-) (%{URIPARAM:url.query}|\-) (?<user.name>([a-zA-Z0-9\/\.\?\-\$]+)) (?<user.group.name>([a-zA-Z0-9\/\.\?\-\$]+)) (?:%{URIHOST:bluecoat.s_supplier_name}|%{IP:s-supplier-name}|\-) (?<http.request.mime_type>([a-zA-Z0-9\-\+\/\;\%\.\=]+)) (%{URI:http.request.referer}|\-) (%{QUOTEDSTRING:user_agent.original}|\-) (%{WORD:bluecoat.filter_result}|\-) (%{QUOTEDSTRING:bluecoat.categories}|\-) (?<bluecoat.virus_id>([a-zA-Z0-9\-\/.])) (%{IP:observer.ip}|\-) (%{NUMBER:bluecoat.observer.port}|\-) (?:%{URIHOST:bluecoat.forwarded_for}|%{IP:x-forwarded-for}|\-) (%{IP:destination.ip}|\-) (?<bluecoat.client_ssl_version>([a-zA-Z0-9\/\.\?\-\(\)\$]+)) (?<bluecoat.client_cipher>([a-zA-Z0-9\/\.\?\-\(\)\$]+)) (?<bluecoat.destination_ssl_version>([a-zA-Z0-9\/\.\?\-\(\)\$]+)) (?<bluecoat.destination_cipher>([a-zA-Z0-9\/\.\?\-\(\)\$]+)) (?<bluecoat.destination_certificate_errors>([a-zA-Z0-9\/\.\?\-\(\)\$]+)) (%{QUOTEDSTRING:bluecoat.application_name}|\-) (%{QUOTEDSTRING:bluecoat.application_operation}|\-)'}
    }
}

I'm not sure it is possible to set the @timestamp field directly from grok; you should use the date filter instead.

Parse your date into another field in grok and use the date filter with that field. For example, change your grok to '%{TIMESTAMP_ISO8601:timestamp}' and then use the date filter.

date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss" ]
}

This will parse the value of your timestamp field into the event @timestamp field.
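Putting the two steps together, a minimal sketch of the filter (the long grok pattern is abbreviated to its leading capture here; `timestamp` is just a temporary field name):

filter {
    grok {
        # capture the leading date into a temporary field instead of @timestamp
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:rest_of_line}" }
    }
    date {
        # yyyy-MM-dd HH:mm:ss matches the sample "2020-08-20 20:10:27"
        match => [ "timestamp", "yyyy-MM-dd HH:mm:ss" ]
    }
    mutate {
        # optional: drop the temporary field once @timestamp is set
        remove_field => [ "timestamp" ]
    }
}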

The issue is that the field already exists. The grok filter's code does different things depending on the type of the existing field. If the existing field is a LogStash::Timestamp, it logs a DEBUG message saying it will leave it unmodified.

[2020-09-11T13:50:33,173][DEBUG][logstash.filters.grok    ][main][91fb85f2e5abc66c776cd383ec1ba5840bbc036e454a5930ae9c1db8f095cefe] Not adding matched value - found existing (LogStash::Timestamp) {:field=>"@timestamp", :value=>"2020-08-20 20:10:27"}

Now you might think that you could get around this by adding

overwrite => [ "@timestamp" ]

to your grok filter, but that does not work. Logstash knows that @timestamp should be a LogStash::Timestamp, and when grok tries to event.set it to a string you get a WARN:

Grok regexp threw exception {:exception=>"wrong argument type String (expected LogStash::Timestamp)"

Change your grok filter to start with '%{TIMESTAMP_ISO8601:[@metadata][timestamp]}' and parse that using a date filter.

date { match => [ "[@metadata][timestamp]", "YYYY-MM-dd HH:mm:ss" ] }
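For completeness, a sketch of the combined filter with this change applied (the long grok pattern from the question is truncated after its first two captures; only the leading %{TIMESTAMP_ISO8601:...} capture changes):

filter {
    grok {
        # capture the date into @metadata so it never collides with the
        # existing LogStash::Timestamp in @timestamp
        match => { "message" => '%{TIMESTAMP_ISO8601:[@metadata][timestamp]} (%{NUMBER:bluecoat.time_taken}|\-) ... rest of the original pattern unchanged ...' }
    }
    date {
        # parse the captured string and write the result to @timestamp;
        # fields under @metadata are not indexed into Elasticsearch
        match => [ "[@metadata][timestamp]", "YYYY-MM-dd HH:mm:ss" ]
    }
}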

Thanks guys. I'll give that a shot.

Follow-up question: Is this because I should be doing something in the Filebeat log input to keep this from happening?

I do not think you can work around this in Filebeat.

Badger, that worked. Thank you!
