I'm a bit confused right now, but let's try to tidy up the chaos in my head: something happened at 12:37 local time and was imported at 13:00, so 02:37 and 03:00 UTC, because Sydney is UTC+10. It's now showing up as 02:37 local time instead, which would mean that at some point in your pipeline UTC was interpreted as Sydney time. And the JSON representation of your event with the UTC dates says that createdon is 16:37 the previous day while @timestamp is 03:37 today? Is that right?
Hi @Badger I will be using that setting and updating the thread with results soon.
Meanwhile, I have realized that Logstash on my Linux box is version 7.7.0 while the ES instance in the cloud is 7.7.1. Maybe that is the reason.
Otherwise, I see that the createdon field is different for events that happened within an hour of each other.
In the database explorer I exported the results as CSV and opened them in Notepad++ to see the real data.
The format it is coming in is:
There is no trailing Z at the end of the timestamps. Is this something I can process explicitly in Elasticsearch?
I can use Logstash filters to add a zone at the end of the timestamp, though I am a bit cagey about it, and about DST in particular.
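For what it's worth, rather than appending a zone suffix to the string, the usual approach is the date filter's timezone option, which handles DST via the tz database. A minimal sketch, assuming the field is called createdon and arrives as a zone-less string like 2020-06-15 12:37:00 (field name and pattern are my guesses, adjust to match your data):

```
filter {
  date {
    # Parse the zone-less string as Sydney local time.
    # The timezone option makes Logstash apply the correct
    # UTC offset, including DST transitions, automatically.
    match    => ["createdon", "yyyy-MM-dd HH:mm:ss"]
    timezone => "Australia/Sydney"
    # Write the parsed UTC timestamp back to the same field
    # instead of the default @timestamp.
    target   => "createdon"
  }
}
```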
Sorry these are not the exact events whose timestamp I took out. But I think it is enough to show the difference.
If you need further help finding the right timezone settings, it might help to set up a pipeline that consists only of your JDBC input and a rubydebug output, and post the results. The Z won't be necessary if Logstash is told which timezone to expect. Your target should be a correct Logstash Timestamp object (which uses UTC), not a specific string format for the date.
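Something like this stripped-down pipeline, with the connection details and statement as placeholders for your own values:

```
input {
  jdbc {
    # Placeholders -- substitute your actual driver, connection
    # string, credentials, and query.
    jdbc_driver_library    => "/path/to/driver.jar"
    jdbc_driver_class      => "..."
    jdbc_connection_string => "jdbc:..."
    jdbc_user              => "..."
    statement              => "SELECT createdon FROM your_table"
    # The jdbc input also has a jdbc_default_timezone option,
    # which tells it to treat zone-less database timestamps as
    # local time in that zone and convert them to UTC:
    # jdbc_default_timezone => "Australia/Sydney"
  }
}
output {
  # rubydebug prints each event as a readable Ruby hash, so you
  # can see exactly what the timestamps look like after ingest.
  stdout { codec => rubydebug }
}
```

Comparing the createdon value in that output against the known wall-clock time of the event will show exactly where the offset goes wrong.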