Can anyone please explain the time difference between the JSON view and the table view?

Hi Team,

I am using an Elastic Cloud account. My application is sending data with timestamps in the Sydney timezone, and I have configured the same timezone in Kibana. I am seeing almost an 11-hour difference between the timestamp in the JSON view and the timestamp in the table view.

timestamp in json view: 2023-10-12 12:13:13
timestamp in table view: 2023-10-12 23:13:13

The JSON view shows the document as it is stored in Elasticsearch. All date and time fields are stored in UTC, so you will see the date and time in UTC.

The table view is how Kibana displays the document, and Kibana converts all UTC dates and times to the timezone of the browser.

Also, the timestamp in the JSON view has a Z at the end, which means the date and time are in UTC. So you do not have 2023-10-12 12:13:13 in the JSON view; you have something like 2023-10-12T12:13:13.000Z.

So 2023-10-12T12:13:13.000Z in the JSON view and 2023-10-12 23:13:13 in the table view in Kibana are the same point in time; the Kibana table view just applies the +11 offset.

Hi @leandrojmp , Thanks for the reply.

I have configured the Sydney timezone in Kibana, so I thought the time in the JSON view would be converted to the Sydney timezone. What if I am already getting the data in Sydney time from the logs and Kibana pushes the time forward again? Can you please give me a solution so that nothing is changed and the time is shown as it is?

Note: I am using a timestamp field while creating the data view, not @timestamp.

Please check the details below.
raw timestamp:
[screenshot of the raw timestamp field]

After converting, a Z is added to the end of the timestamp, but the time in the table view is way ahead, as displayed in the image below.

[screenshot of the timestamp in the table view]

This won't happen. If the field is a date field, either because it has a date mapping or because Elasticsearch maps it as a date field, it will be stored in UTC. Date and time fields in Elasticsearch are always in UTC and this cannot be changed. Since the JSON view shows the fields as they are stored in Elasticsearch, they will be shown in UTC.

If your date-time string is in the Sydney timezone but does not carry that information, then you need to tell Elasticsearch at ingest time that the time is not in UTC; if you do not give the timezone information, it will be interpreted as UTC.

For example, if you have a date string like 2023-10-12 12:13:13 and this is already in Sydney time, you need to tell Elasticsearch that this date string has an offset; if there is no information about the offset, it will be interpreted as a UTC time.

How you do this depends on how you are ingesting your data. If you are using Logstash or an ingest pipeline, for example, both have a date filter/processor that you can use to convert the data to UTC.

This means that you are not informing Elasticsearch that your date string has an offset, which is a pretty common issue.

As mentioned, Elasticsearch will interpret every date string it receives as being in UTC. If the strings are not in UTC, they need to carry the offset information; if they do not, you need to use a date filter to convert them to UTC.

Both Logstash and ingest pipelines have date filters/processors that you can use to convert the date to UTC.
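For example, a minimal sketch of such an ingest pipeline, assuming the field is called timestamp (as mentioned earlier in this thread), that the strings use the yyyy-MM-dd HH:mm:ss format from the examples here, and a pipeline name that is purely illustrative:

PUT _ingest/pipeline/sydney-timestamps
{
  "description": "Sketch: parse local Sydney timestamps and store them as UTC",
  "processors": [
    {
      "date": {
        "field": "timestamp",
        "formats": ["yyyy-MM-dd HH:mm:ss"],
        "timezone": "Australia/Sydney",
        "target_field": "timestamp"
      }
    }
  ]
}

With something like this, a document containing timestamp: "2023-10-13 10:00:00" would be stored as timestamp: "2023-10-12T23:00:00.000Z", since Sydney is UTC+11 in October, and Kibana would then display it back as Sydney time.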

Hi @leandrojmp ,

If it converts the time to UTC, there shouldn't be an 11-hour difference, as I am seeing only a 1-hour difference between UTC and the Sydney timezone.

I'm using Apache NiFi to forward the logs into Elasticsearch. Where can I mention the offset to represent the Sydney timezone?

Not sure what you are seeing, but when I say it converts the time to UTC, I mean it will interpret the time as UTC; if the date string does not have any offset, no offset will be added.

Can you share the following:

  • A sample of your log before ingesting to Elasticsearch
  • A sample of the same log in Elasticsearch showing how the date field is stored (the json view in Kibana)

But for example, imagine that your original log has the following date string in a field:

timestamp: "2023-10-13 10:00:00"

This date string does not carry any information about the timezone in which it was generated, so it is impossible to know whether it has a timezone offset or not.

When Elasticsearch receives this data and the mapping is correct, i.e. the field was mapped as a date field or Elasticsearch was able to infer that it is a date field, this date string will be interpreted as a UTC date, so you will end up with something like this:

timestamp: "2023-10-13T10:00:00.000Z"

Now, if you go to Kibana and your Kibana timezone is set to the Sydney timezone, this UTC date will be converted to it and you will see 2023-10-13 21:00:00.000.

The problem arises if your original date string was generated with an offset but the offset is not included in the string.

If the time 2023-10-13 10:00:00 is actually the same as 2023-10-13 10:00:00+1100 but the offset is not in the string, then you will have issues: Elasticsearch will not know about the offset and will treat the value as UTC, and Kibana will then apply the offset on top of it.

To solve this you have two options:

  • Change your logs to have the offset information in the date string
  • Use a date filter during ingestion to tell Elasticsearch about the offset

I've never used NiFi, but you can use an ingest pipeline in Elasticsearch with a date processor to parse your date correctly; how you would do that inside NiFi I'm not sure.
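If you go the ingest pipeline route, the simulate API is a quick way to check the date conversion before wiring anything up; this sketch reuses the illustrative sydney-timestamps pipeline and timestamp field from the earlier example:

POST _ingest/pipeline/sydney-timestamps/_simulate
{
  "docs": [
    { "_source": { "timestamp": "2023-10-13 10:00:00" } }
  ]
}

The response should show the field converted to UTC (2023-10-12T23:00:00.000Z), which Kibana will then render as 2023-10-13 10:00:00 when it is set to the Sydney timezone.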

You can, however, add a setting to your index template to tell Elasticsearch to always run a specific ingest pipeline.

Adding this to your index template settings will tell Elasticsearch to always run that pipeline:

"index.final_pipeline: "your-pipeline-name"

Thanks @leandrojmp. It worked when I sent the logs manually into an index by following your steps. Now I have to try it using the automated way.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.