Why event dates in Elasticsearch logs are not in UTC

Hello,
I recently upgraded my installation to Elasticsearch 6.1.1 and noticed a slight issue with the indexed data in Kibana.
If I choose the 'last 15 min' filter, I am not able to see the data. However, if I choose 'Today', I see the indexed events one hour later.
I think the problem comes from the timestamps found in the Elasticsearch log files.
The date is in ISO8601 format but not in UTC. The line is then read by Filebeat (with the elasticsearch module enabled) and sent to Elasticsearch.
It goes through the ingest pipeline, which just parses the message field and uses a date processor:
{
  "date" : {
    "formats" : [
      "ISO8601"
    ],
    "ignore_failure" : true,
    "field" : "elasticsearch.server.timestamp",
    "target_field" : "@timestamp"
  }
},

but the processor probably assumes that the timestamp is already in UTC, which would explain the one-hour shift.
I tried (with success) changing the log4j2.properties by adding {UTC} to the pattern of each rolling file appender: %d{ISO8601}{UTC}
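For illustration, this is roughly what the modified appender looks like (a sketch based on the default Elasticsearch 6.x layout; your exact pattern may differ):

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.layout.type = PatternLayout
# {UTC} after %d{ISO8601} tells log4j2 to render the timestamp in UTC instead of the JVM's local zone
appender.rolling.layout.pattern = [%d{ISO8601}{UTC}][%-5p][%-25c{1.}] %marker%m%n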

but I suppose that this is not the right approach.
I could also modify the existing pipeline by adding a timezone parameter...
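For example, something like this (a sketch only; the ingest date processor does accept a timezone option, and Europe/Luxembourg is just my guess at the zone to use):

{
  "date" : {
    "field" : "elasticsearch.server.timestamp",
    "target_field" : "@timestamp",
    "formats" : [
      "ISO8601"
    ],
    "timezone" : "Europe/Luxembourg"
  }
}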

So any help would be appreciated :slight_smile:

Regards,
Pascal

@Pascal72 would you be able to provide some log samples?
Do I understand correctly that you ingest logs from one ES instance into a second ES instance and then visualise them in Kibana connected to the second instance?

@Pascal72 you could also try configuring your Filebeat module with var.convert_timezone; that would make Filebeat parse the date using the system time zone.

https://www.elastic.co/guide/en/beats/filebeat/6.1/filebeat-module-system.html
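Something along these lines in the module configuration (a sketch, assuming the elasticsearch fileset exposes the option the same way the system module does):

- module: elasticsearch
  server:
    enabled: true
    # parse log timestamps using the machine's time zone instead of assuming UTC
    var.convert_timezone: true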

Since ES 7.0 you should expect the time zone to be present on the timestamp in JSON logs.
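For example, a 7.x JSON log entry carries an offset-qualified timestamp, roughly of this shape (values are illustrative):

{"type": "server", "timestamp": "2019-03-11T16:29:18,502+0100", "level": "INFO", "message": "deleting index"}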

Here is a sample.
In my log file:
[2019-03-11T16:29:18,502][INFO ][o.e.c.m.MetaDataDeleteIndexService] [xxxxxx] [project.346bdccd-d612-11e8-a907-00505693238b.2019.03.09/dQd_YJXXSqu5NRcj3EDTLA] deleting index

and once indexed:
{
  "_index": "filebeat-6.6.1-2019.03.11",
  "_type": "doc",
  "_id": "d55fbWkBMSPaBqwwcjdw",
  "_version": 1,
  "_score": null,
  "_source": {
    "offset": 5995513,
    "log": {
      "file": {
        "path": "/var/log/elasticsearch/es-cluster.log"
      },
      "level": "INFO"
    },
    "prospector": {
      "type": "log"
    },
    "source": "/var/log/elasticsearch/es-cluster.log",
    "fileset": {
      "module": "elasticsearch",
      "name": "server"
    },
    "message": "deleting index",
    "input": {
      "type": "log"
    },
    "@timestamp": "2019-03-11T16:29:18.502Z",
    "elasticsearch": {
      "server": {
        "component": "o.e.c.m.MetaDataDeleteIndexService"
      },
      "node": {
        "name": "xxxx"
      },
      "index": {
        "name": "project.346bdccd-d612-11e8-a907-00505693238b.2019.03.09",
        "id": "dQd_YJXXSqu5NRcj3EDTLA"
      }
    },
    "service": {
      "name": "elasticsearch"
    },
    "host": {
      "name": "xxxxx"
    },
    "beat": {
      "hostname": "xxxx",
      "name": "xxxxx",
      "version": "6.6.1"
    },
    "event": {
      "created": "2019-03-11T15:29:24.946Z",
      "dataset": "elasticsearch.server"
    }
  },
  "fields": {
    "@timestamp": [
      "2019-03-11T16:29:18.502Z"
    ],
    "event.created": [
      "2019-03-11T15:29:24.946Z"
    ]
  }
}

16:29:24 is the local time (Luxembourg).
From my understanding, it should be indexed as 15:29:24 (UTC), and Kibana, through the browser interface, should then display 16:29:24 and not 17:29:24.

Yes, I also tried playing with the var.convert_timezone option, but I don't see any change.

Apparently, this option triggers the add_locale processor:
{{ if .convert_timezone }}
processors:
  - add_locale: ~
{{ end }}

and if I check the documentation, I see:
The add_locale processor enriches each event with the machine's time zone offset from UTC or with the name of the time zone. It supports one configuration option named format that controls whether an offset or time zone abbreviation is added to the event. The default format is offset. The processor adds a beat.timezone value to each event.

it enriches the event, but it doesn't convert the timestamp to UTC.
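In other words, with the default format it would just tag the event with the local offset, something like this (the offset value is illustrative for my machine):

"beat": {
  "timezone": "+01:00"
}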

Am I right?

Hi @Pascal72,

Two questions:

  1. What version of Filebeat are you running? In the original post you said 6.1.1, but from the sample output in a later post it looks like 6.6.1. I'm guessing it's 6.6.1, but do you mind confirming that?

  2. Could you make the following Elasticsearch API call and post the output here please?

    GET _ingest/pipeline/filebeat-*elasticsearch*server*
    

Thanks,

Shaunak

Hi @shaunak,

Sorry, it is a typo; I am on 6.6.1.

Here is the result of the command:

{
  "filebeat-6.6.1-elasticsearch-server-pipeline" : {
    "on_failure" : [
      {
        "set" : {
          "field" : "error.message",
          "value" : "{{ _ingest.on_failure_message }}"
        }
      }
    ],
    "processors" : [
      {
        "rename" : {
          "target_field" : "event.created",
          "field" : "@timestamp"
        }
      },
      {
        "grok" : {
          "field" : "message",
          "pattern_definitions" : {
            "GREEDYMULTILINE" : "(.|\n)*",
            "INDEXNAME" : "[a-zA-Z0-9_.-]*"
          },
          "patterns" : [
            """\[%{TIMESTAMP_ISO8601:elasticsearch.server.timestamp}\]\[%{LOGLEVEL:log.level}%{SPACE}?\]\[%{DATA:elasticsearch.server.component}%{SPACE}\](%{SPACE})?(\[%{DATA:elasticsearch.node.name}\])?(%{SPACE})?(\[gc\](\[young\]\[%{NUMBER:elasticsearch.server.gc.young.one}\]\[%{NUMBER:elasticsearch.server.gc.young.two}\]|\[%{NUMBER:elasticsearch.server.gc_overhead}\]))?%{SPACE}((\[%{INDEXNAME:elasticsearch.index.name}\]|\[%{INDEXNAME:elasticsearch.index.name}\/%{DATA:elasticsearch.index.id}\]))?%{SPACE}%{GREEDYMULTILINE:message}"""
          ]
        }
      },
      {
        "date" : {
          "formats" : [
            "ISO8601"
          ],
          "ignore_failure" : true,
          "field" : "elasticsearch.server.timestamp",
          "target_field" : "@timestamp"
        }
      },
      {
        "remove" : {
          "field" : "elasticsearch.server.timestamp"
        }
      }
    ],
    "description" : "Pipeline for parsing the Elasticsearch server log file."
  }
}

Hi @Pascal72,

I'm afraid you are running into this bug in 6.6.1: https://github.com/elastic/beats/issues/11055

We will be releasing 6.6.2 shortly, in which this bug is fixed. After that you should be able to use the convert_timezone option as expected.
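Once it works, the generated pipeline's date processor should pick up the time zone that Filebeat records on the event, roughly like this (a sketch of the expected shape, not the exact generated pipeline; the date processor's timezone option supports template snippets):

{
  "date" : {
    "field" : "elasticsearch.server.timestamp",
    "target_field" : "@timestamp",
    "formats" : [
      "ISO8601"
    ],
    "timezone" : "{{ beat.timezone }}"
  }
}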

Sorry about this bug! :frowning_face:

Shaunak
