Timestamp timezone


That right there is the problem.

Maybe if you only work in one timezone, or handle each timezone separately, it's not a problem. But for those of us who work across various timezones and pull data from them to a central location, it means we have to manually add a timezone to each location's data, according to that location.

This problem could be solved if we could set the timezone in the Beat/Logstash YAML files
(these could be set automatically as part of the deployment scripts),
or, even better, have them simply use the timezone of the system they run on.

I get that it's easier for devs not to take the TZ into account, but this makes administration more difficult.

How can the TZ of beats/logstash be adjusted?

Can you give a little more context about your issue? It is not clear.

Which timezone do you want to adjust? How are you indexing the data: using a Beat sending directly to Elasticsearch, or through Logstash?

The @timestamp created by Logstash is in UTC and you can't change that; all the times in Elasticsearch are in UTC as well, and you can't change that either.


Both the Beats timestamp and the Logstash timestamp are a problem.
Is there a plan to change this?
I can see I'm not the only one mentioning that this is a problem...

There is a central system that queries these logs and is based in a different time zone,
yet alerts need to be displayed with the local time zone...

This won't change: Elasticsearch stores every date/time field in UTC, and the conversion to and from local time needs to be done by the application.

For example, if you are using Kibana, it will convert from UTC to the browser's timezone, and Kibana can be configured to convert to other timezones as well.

If you are indexing with Logstash, you can convert your local time to UTC using the date filter.
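As a minimal sketch, a date filter like this parses a local-time string and stores the equivalent UTC instant (the source field name `log_timestamp`, its pattern, and the `Europe/Lisbon` zone are placeholders; adjust them to your data):

```
filter {
  date {
    # Parse the local-time string in this field...
    match => ["log_timestamp", "yyyy-MM-dd HH:mm:ss"]
    # ...interpreting it as Lisbon local time...
    timezone => "Europe/Lisbon"
    # ...and store the resulting UTC instant in @timestamp.
    target => "@timestamp"
  }
}
```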

But in the end all the date/time fields will be in UTC; this won't change. Your systems need to convert to and from local time, or work with UTC.

Take a look at the General Recommendations in this blog post about centralized logging with Elasticsearch.

You can use the Beats add_locale processor to add the timezone of the local machine to the event. This can be paired with the Logstash date filter to interpret dates according to the origin machine's timezone.
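A sketch of that pairing (the `timestamp` field name is illustrative; `event.timezone` is the field that add_locale populates):

```
# filebeat.yml - attach the shipper's timezone offset to every event
processors:
  - add_locale: ~
```

```
# Logstash pipeline - parse the local-time field using the shipper's zone
filter {
  date {
    match => ["timestamp", "ISO8601"]
    timezone => "%{[event][timezone]}"
  }
}
```

With this, each event is interpreted in the timezone of the machine that produced it, and @timestamp still ends up in UTC.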

I understand, but I'm not marking this as the answer, because I think this is simply conceptually wrong.

I shouldn't need to:
A: add a field that increases space consumption, or
B: change the way everything on our end works to use UTC.

I think you are just dumping the problem on the user.

I don't think this is wrong, especially when you are working with centralized logging.

When you collect logs from sources in different time zones, you need to know the exact time an event happened, because this will influence the analysis of your logs.

For example, assume you have a server in Lisbon, Portugal, and another in Berlin, Germany; both log in local time, and your central system does not use UTC to store the logs.

If your Berlin server logs an event at 2022-02-23 14:00 and your Lisbon server logs an event at 2022-02-23 13:30, can you easily tell which one happened first when looking at the logs during an analysis? Imagine someone in New York who needs to analyze those logs without any timezone information; how would that work?

Even when timezones are included in your logs they can be confusing. Imagine that you have two servers that each log an event at 2022-02-23 15:00 IST: without knowing the location of each server, you cannot guarantee that this is the same instant, because IST is a timezone abbreviation used for at least three completely different places (Indian, Irish, and Israel Standard Time), and the same happens with other abbreviations.

Working with time zones is very tricky and can be a mess. The easiest way to avoid some of those problems is to agree on a reference time and convert all timezones to and from it; that is why UTC is used for all times in Elasticsearch, and I don't think Elastic will ever change that.

What the user needs to do is make sure that local dates carry the correct offset from UTC before being sent to Elasticsearch; there are ways to easily do that in both Filebeat and Logstash.
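To illustrate with the Berlin/Lisbon example above: an ISO 8601 timestamp that carries its offset is unambiguous regardless of where it is indexed or viewed.

```
2022-02-23T14:00:00+01:00   # Berlin local time  = 13:00:00 UTC
2022-02-23T13:30:00+00:00   # Lisbon local time  = 13:30:00 UTC
```

Here it is immediately clear that the Berlin event happened first, even though its local wall-clock time reads later.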

If this is an issue for you, give more context and explain what the issue is; maybe someone can give you a solution.


You are right, if it's the same system across multiple locations.
Our case is that of different systems in different locations.
There is no correlation between faults in different systems; they are 100% disconnected.

Getting the local time on the alarm helps the tier 1 tech know at what local time the problem happened, and which local-time timestamps to look at if they check the local ELK logs (which Kibana displays in local time).

Not to mention it makes the human compiling of incident reports that little bit easier afterwards.
I've seen cases where the timezones under discussion were not clear, and it made the process a lot more difficult than it had to be. Humans make mistakes.

It makes tier 1's job (they are the least technically savvy) easier.

I'm sorry, but I still cannot see what issue you are having with the timestamps and timezones.

Are you receiving alerts with the wrong time? What are you using for alerting?

As already said before, Elasticsearch stores all date/times in UTC and this cannot be changed; the fields created by the date filter in Logstash, and the @timestamp field, will always be in UTC as well.

But none of this should be a problem for creating alerts and visualizations in local time, as long as your date and time fields carry timezone information.

In my case, for example, I collect logs from different sources in different locations, but since all the date fields have timezone information, I receive alerts based on the exact time events happened, and I can look at the logs with the exact time they happened, as Kibana converts from UTC.

Now, if your issue is that you want to show the time based not on the local time of the user looking at Kibana, but on the time of the origin location, which could be different, there may be ways to adjust that.

If you provide more context about the issues that different timestamps and timezones are causing in your use case, maybe someone can point you to a solution.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.