Filebeat system module auth log timezone conversion problem

Greetings, we're VERY new to the Elastic stack. Here's what we've got working so far, running on Oracle Linux 7 (a RHEL7 clone) with the 6.3 (latest) versions installed via YUM:

filebeat-6.3.1-1.x86_64

All our systems are configured to run in local time (MDT) not UTC.

filebeat is configured with the 'system' module enabled for output.elasticsearch, and we've confirmed that we're getting data (and dashboard objects) in Kibana. Initial setup was fine, but we noticed that all of our data in Kibana showed up 6 hours in the past (which matches our time zone offset). I did a fair amount of research before finally arriving at the 'system' module config file and the 'var.convert_timezone' parameter. We set this to 'true' and got a partial solution.

All the data from /var/log/messages (handled by the syslog fileset) is now indexed correctly and shows up in Kibana at the right time, but the data harvested from /var/log/secure (handled by the auth fileset) still shows up 6 hours in the past.

I could really use some help tracking down what to change in the module (/usr/share/filebeat/module/system) to get the timezone conversion working for both filesets.
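From my research, my understanding is that var.convert_timezone works by templating a timezone option into the date processor of each fileset's ingest pipeline (ingest/pipeline.json under the module directory). A rough sketch of what I'd expect the auth date processor to look like with the option enabled — this is my reading of how it works, not copied from our actual pipeline:

```json
{
  "date": {
    "field": "system.auth.timestamp",
    "target_field": "@timestamp",
    "formats": ["MMM  d HH:mm:ss", "MMM dd HH:mm:ss"],
    "timezone": "{{ beat.timezone }}"
  }
}
```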

Thanks!

Our /etc/filebeat/modules.d/system.yml:

- module: system
  # Syslog
  syslog:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    var.convert_timezone: true

  # Authorization logs
  auth:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    var.convert_timezone: true

# end

Could you check both your log files to see what the time zone of the timestamps is? Please also share some examples.

Ruflin-

All the timestamps in our local logs are in local (MDT) time.

I've included examples from both below:

Here's a log entry from /var/log/secure:

Jul 16 13:46:37 isxxxxxx sshd[435]: pam_unix(sshd:session): session opened for user user by (uid=0)

Here's the result as it appears in Kibana:

{
  "_index": "filebeat-6.3.1-2018.07.16",
  "_type": "doc",
  "_id": "bpOhpGQBj3x1MQXbgVba",
  "_version": 1,
  "_score": null,
  "_source": {
    "offset": 6487,
    "prospector": {
      "type": "log"
    },
    "source": "/var/log/secure",
    "fileset": {
      "module": "system",
      "name": "auth"
    },
    "input": {
      "type": "log"
    },
    "@timestamp": "2018-07-16T13:46:37.000Z",
    "system": {
      "auth": {
        "hostname": "isxxxxxx",
        "pid": "435",
        "program": "sshd",
        "message": "pam_unix(sshd:session): session opened for user user by (uid=0)",
        "timestamp": "Jul 16 13:46:37"
      }
    },
    "beat": {
      "hostname": "isxxxxxx....",
      "timezone": "-06:00",
      "name": "isxxxxxx...",
      "version": "6.3.1"
    },
    "host": {
      "name": "isxxxxxx...."
    }
  },
  "fields": {
    "@timestamp": [
      "2018-07-16T13:46:37.000Z"
    ]
  },
  "sort": [
    1531748797000
  ]
}

From /var/log/messages

Jul 16 13:50:01 isxxxxxx systemd: Created slice User Slice of root.

{
  "_index": "filebeat-6.3.1-2018.07.16",
  "_type": "doc",
  "_id": "BZOkpGQBj3x1MQXboljO",
  "_version": 1,
  "_score": null,
  "_source": {
    "offset": 4824020,
    "prospector": {
      "type": "log"
    },
    "source": "/var/log/messages",
    "fileset": {
      "module": "system",
      "name": "syslog"
    },
    "input": {
      "type": "log"
    },
    "@timestamp": "2018-07-16T13:50:01.000-06:00",
    "system": {
      "syslog": {
        "hostname": "isxxxxxx",
        "program": "systemd",
        "message": "Created slice User Slice of root.",
        "timestamp": "Jul 16 13:50:01"
      }
    },
    "beat": {
      "hostname": "isxxxxxx...",
      "timezone": "-06:00",
      "name": "isxxxxx...",
      "version": "6.3.1"
    },
    "host": {
      "name": "isxxxxx...."
    }
  },
  "fields": {
    "@timestamp": [
      "2018-07-16T19:50:01.000Z"
    ]
  },
  "sort": [
    1531770601000
  ]
}
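The sort values (epoch milliseconds of @timestamp) make the discrepancy easy to see: the auth entry's local timestamp was taken as if it were already UTC, while the syslog entry was correctly shifted by our -06:00 offset. Checking the epoch values with GNU date:

```shell
# auth doc: sort value 1531748797000 ms -- the local 13:46:37 was indexed
# as 13:46:37 UTC, i.e. no timezone shift was applied
date -u -d @1531748797    # Mon Jul 16 13:46:37 UTC 2018

# syslog doc: sort value 1531770601000 ms -- the local 13:50:01 MDT was
# correctly shifted to 19:50:01 UTC
date -u -d @1531770601    # Mon Jul 16 19:50:01 UTC 2018
```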

Did you overwrite the pipeline after you made the changes with the time zone conversion? Could it be that you tested it with syslog first and it worked and then only added auth and the pipeline was not overwritten?

I believe that I did overwrite the pipeline with:

systemctl stop filebeat
curl -XDELETE 'http://localhost:9200/filebeat-*'
systemctl start filebeat

However, as we're pretty new to this, if I've missed a step I'm happy to be corrected.

--
Ray Frush

Hmmm, I suspect that the pipeline was not deleted with that command, since it lives under /_ingest, not /filebeat-*, in ES.

Check out the Ingest APIs, you'll probably find the old pipeline in there, and can delete it.

Andrew-

Thanks for the pointer. That got it!

curl -XGET 'http://localhost:9200/_ingest/pipeline'
(shows all of the pipelines in place)

curl -XDELETE 'http://localhost:9200/_ingest/pipeline/filebeat-6.3.1-system-auth-pipeline'
curl -XDELETE 'http://localhost:9200/_ingest/pipeline/filebeat-6.3.1-system-syslog-pipeline'
(I probably didn't need to remove both of the pipelines, but just to be sure)

curl -XGET 'http://localhost:9200/_ingest/pipeline'
(showed a much shorter list)
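To double-check, after restarting filebeat (which recreates the pipelines), the rebuilt auth pipeline can be fetched and inspected to confirm the timezone option made it into its date processor:

```shell
# fetch the recreated pipeline and look for "timezone" in the date processor
curl -XGET 'http://localhost:9200/_ingest/pipeline/filebeat-6.3.1-system-auth-pipeline?pretty'
```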

I'm a little surprised that filebeat doesn't (or can't) update an existing pipeline when parameters are updated in the Filebeat config. Is there a reason for that?
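From the docs it looks like Filebeat only loads a module pipeline if one with that name doesn't already exist. There appears to be a filebeat.yml setting to force re-loading, though I haven't tried it and can't vouch for the exact name or which versions have it:

```yaml
# filebeat.yml -- reportedly forces module ingest pipelines to be re-uploaded
# on startup, even if a pipeline with the same name already exists (untested)
filebeat.overwrite_pipelines: true
```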

--
Ray Frush
Colorado State University

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.