[Still Not Solved!] Filebeat cannot recognize timezone in syslog

Nope. I think my @timestamp field is in UTC+8, as it's the same as my local timestamp.

Okay, so if the value in the @timestamp field is in UTC+8, there's your problem :slight_smile:
It should be in UTC; Kibana will then convert it from UTC to UTC+8 (in your case) on the fly.
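For example (illustrative values only, assuming a browser in UTC+8):

    # Stored in Elasticsearch (UTC):
    "@timestamp": "2019-07-15T02:00:00.000Z"
    # Rendered by Kibana in a UTC+8 browser:
    Jul 15, 2019 @ 10:00:00.000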

How have you configured filebeat to ingest Elasticsearch logs?

If you mean pre-processing, no :joy: I haven't yet.

No, I mean how is the filebeat configured?

I used the modules enable command to enable them from the command line.

Like filebeat modules enable system apache elasticsearch kibana mysql.

I only edited the output.elasticsearch and setup.kibana parts in filebeat.yml.
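For reference, a minimal sketch of those two sections in filebeat.yml (the hosts here are placeholders, not my actual values):

    output.elasticsearch:
      hosts: ["localhost:9200"]

    setup.kibana:
      host: "localhost:5601"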

Okay, then I don't think I can help you any further; I have never used any modules.
Make sure you are running the latest version of Filebeat, in case this is a bug that has already been fixed.

I don't think it's a problem with the modules.

I use local time on my server; I think the only way I can solve this may be to change my server timezone to UTC.

I will keep trying more methods to solve the timezone problem. Thanks a lot for your help!

I hope someone can help me...

I've just updated my configuration. Could you please have a look?

I think you are right: https://github.com/elastic/beats/issues/9756

Didn’t the convert timezone option fix the issue?

When the timestamps in the original files are in local time, the setting var.convert_timezone: true is needed in the module configuration.
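For the system module, a sketch of what that looks like in modules.d/system.yml (adapt the filesets and paths to your own setup):

    - module: system
      syslog:
        enabled: true
        var.convert_timezone: true
      auth:
        enabled: true
        var.convert_timezone: true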

Not all modules support this setting, so check the module documentation. The system module supports it for sure, but I don't see it in the Elasticsearch module documentation (although I believe it's available, based on https://github.com/elastic/beats/pull/9761).

Anyway, the setting works by adding the timezone information during indexing via an ingest pipeline. So whenever you change this setting, it's very important to run Filebeat setup once to recreate the ingest pipelines in Elasticsearch, so that the new timezone information is considered during indexing.

To do that, you should run something like ./filebeat setup --pipelines --modules system. The pipeline code should differ depending on whether convert_timezone is set to true or not (it defaults to false).
More info here: https://www.elastic.co/guide/en/logstash/current/use-ingest-pipelines.html

Take a look at your ingest pipelines (GET _ingest/pipeline). If the pipeline looks correct there, then it might be a bug, but I would check that first (because for the system module I'm almost sure it works fine).

One way to check that the pipeline is recreated is to delete it from Elasticsearch first, then restart Filebeat (or run the command I shared before) and verify in Elasticsearch that the pipeline is created again properly, as in the sketch below.
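A possible sequence (a sketch, assuming Elasticsearch on localhost:9200 and the pipeline names from Filebeat 7.2.0):

    # Inspect the current pipeline
    curl -XGET 'http://localhost:9200/_ingest/pipeline/filebeat-7.2.0-system-syslog-pipeline?pretty'
    # Delete it
    curl -XDELETE 'http://localhost:9200/_ingest/pipeline/filebeat-7.2.0-system-syslog-pipeline'
    # Recreate it (or simply restart Filebeat)
    ./filebeat setup --pipelines --modules system
    # Verify it now contains the timezone information
    curl -XGET 'http://localhost:9200/_ingest/pipeline/filebeat-7.2.0-system-syslog-pipeline?pretty'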

Hope it helps!

No, it still doesn't work.

I tried enabling var.convert_timezone: true in {conf_path}/modules.d/system.yml, as I said in the question, and I thought it worked.

Then I filtered the data from Filebeat and found that some logs have the correct timestamp and some do not.

The logs in apache, mysql, and elastic server have a correct @timestamp value, while the logs in syslog, elasticsearch.log, es_deprecation, and es_gc are incorrect.

I've also checked my _ingest/pipeline and I don't think there's any problem in the syslog pipeline:

"filebeat-7.2.0-system-syslog-pipeline" : {
    "processors" : [
      {
        "grok" : {
          "field" : "message",
          "patterns" : [
            """%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{GREEDYMULTILINE:system.syslog.message}""",
            "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}",
            """%{TIMESTAMP_ISO8601:system.syslog.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{GREEDYMULTILINE:system.syslog.message}"""
          ],
          "pattern_definitions" : {
            "GREEDYMULTILINE" : "(.|\n)*"
          },
          "ignore_missing" : true
        }
      },
      {
        "remove" : {
          "field" : "message"
        }
      },
      {
        "rename" : {
          "field" : "system.syslog.message",
          "target_field" : "message",
          "ignore_missing" : true
        }
      },
      {
        "date" : {
          "field" : "system.syslog.timestamp",
          "target_field" : "@timestamp",
          "formats" : [
            "MMM  d HH:mm:ss",
            "MMM dd HH:mm:ss",
            "ISO8601"
          ],
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "field" : "system.syslog.timestamp"
        }
      }
    ],
    "on_failure" : [
      {
        "set" : {
          "field" : "error.message",
          "value" : "{{ _ingest.on_failure_message }}"
        }
      }
    ],
    "description" : "Pipeline for parsing Syslog messages."
  }

That pipeline is wrong; it doesn't include the timezone information.
Delete that pipeline and just restart Filebeat (don't run the ./filebeat setup command...).

If the pipeline is not created, run Filebeat with this parameter added: -E filebeat.overwrite_pipelines=true.

Share the new pipeline as soon as it's created. If the var.convert_timezone from your config is not ignored, your pipeline should contain something like:

    "date": {
          "timezone": "{{ beat.timezone }}",
...
...

I once faced this bug: https://github.com/elastic/beats/issues/9747, where the manual command ignored the timezone setting from the module configuration, but I suppose that should have been fixed in the latest release...

I have deleted filebeat-7.2.0-system-syslog-pipeline and filebeat-7.2.0-system-auth-pipeline.

Then I ran Filebeat with a command like this:

/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat -E filebeat.overwrite_pipelines=true

Then I used GET /_ingest/pipeline?pretty, but I couldn't find any newly created pipeline.

The whole pipeline list after restarting Filebeat is here.

Well... my apologies, the new pipeline has been created, and it has a timezone property!
I looked further and found that only these pipelines have the timezone property:

filebeat-7.2.0-system-syslog-pipeline (recreated)
filebeat-7.2.0-system-auth-pipeline (recreated)
filebeat-7.2.0-elasticsearch-server-pipeline
filebeat-7.2.0-elasticsearch-slowlog-pipeline
filebeat-7.2.0-elasticsearch-audit-pipeline

Is that normal?

Unfortunately, the @timestamp values in the syslogs are still incorrect... The timezone configuration in the syslog pipeline now looks like this:

"timezone" : "{{ event.timezone }}"

There are other developers facing the same problem as mine! I hope you can figure out the problem and fix it. That would be really helpful.

Hello,

I have the same problem with Filebeat's elasticsearch module, but it works fine for the system module.
Let me share what I did to fix it. However, my ELK stack is in development and not in production, so following my procedure might cause loss of data.

My server timezone is UTC+8:00 (Singapore).
Enabling var.convert_timezone: true converts my server-local timestamps to UTC, which means minus 8 hours.
In Kibana, the timezone for date formatting was left at the default, where it reads the timezone of the browser and adds the 8 hours back on top of UTC.
Hence, it shows the correct timing.
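In other words (illustrative values only):

    # syslog line, server-local (UTC+8):   Jul 15 10:00:00
    # stored @timestamp (UTC):             2019-07-15T02:00:00.000Z    (10:00 - 8h)
    # Kibana display (browser in UTC+8):   Jul 15, 2019 @ 10:00:00.000 (02:00 + 8h)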

Re-indexing and Recreating the Pipeline

  1. Stop Filebeat on all instances that use the old pipeline:
     $ systemctl stop filebeat
  2. Enable UTC time conversion (var.convert_timezone: true) in system.yml.
  3. Delete the pipelines:
     $ curl -XDELETE 'http://esnode1:9200/_ingest/pipeline/filebeat-*'
  4. Delete the related indices under Index Management. Since the logs shown in Kibana have the wrong time, we should reindex all the logs: in Kibana, navigate to Management > Index Management, select the index that stores the server logs, and delete it.
  5. Recreate the index and the pipelines:
     $ filebeat setup -e
     $ filebeat setup --pipelines --modules="system"
  6. Restart Filebeat:
     $ systemctl start filebeat
  7. Observe the Logs tab in Kibana; it should now show logs with the correct timing.

Alright! I'll give it a try. I have the problem on system as well as elasticsearch, LOL.
