Filebeat - Syslog module no auth.logs in Elastic

Hi,
Just enabled the Filebeat syslog module on my Ubuntu 16.04 box. I'm getting syslog output into Elastic, but not the auth.log entries.
I enabled debug logging in the filebeat.yml file and can see the following, so I'm assuming that the logs are getting fed up to Elastic...

Module syslog is enabled.

I even tried hard-setting the location of my auth.log file manually (/var/log/auth.log), but no go.

Output of my debug log (/var/log/filebeat/filebeat):

2018-03-24T18:06:40.301+1100	DEBUG	[harvester]	log/log.go:85	End of file reached: /var/log/auth.log; Backoff now.
2018-03-24T18:06:40.302+1100	DEBUG	[publish]	pipeline/processor.go:275	Publish event: {
  "@timestamp": "2018-03-24T07:06:40.301Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.3",
    "pipeline": "filebeat-6.2.3-system-auth-pipeline"
  },
  "message": "Mar 24 18:06:36 els01 sshd[2332]: Accepted password for root from 192.168.10.8 port 59525 ssh2",
  "source": "/var/log/auth.log",
  "offset": 290639,
  "fileset": {
    "module": "system",
    "name": "auth"
  },
  "prospector": {
    "type": "log"
  },
  "beat": {
    "name": "els01",
    "hostname": "els01",
    "version": "6.2.3"
  }
}
2018-03-24T18:06:40.302+1100	DEBUG	[publish]	pipeline/processor.go:275	Publish event: {
  "@timestamp": "2018-03-24T07:06:40.301Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.3",
    "pipeline": "filebeat-6.2.3-system-auth-pipeline"
  },
  "offset": 290737,
  "message": "Mar 24 18:06:36 els01 sshd[2332]: pam_unix(sshd:session): session opened for user root by (uid=0)",
  "source": "/var/log/auth.log",
  "fileset": {
    "module": "system",
    "name": "auth"
  },
  "prospector": {
    "type": "log"
  },
  "beat": {
    "name": "els01",
    "hostname": "els01",
    "version": "6.2.3"
  }
}
2018-03-24T18:06:40.314+1100	DEBUG	[harvester]	log/log.go:85	End of file reached: /var/log/syslog; Backoff now.
2018-03-24T18:06:41.302+1100	DEBUG	[harvester]	log/log.go:85	End of file reached: /var/log/auth.log; Backoff now.

Could you share your configuration for the system module?

From the Filebeat output, it looks like messages from /var/log/auth.log are in fact being sent over to Elasticsearch. Do you see them in Discover? Search for source:/var/log/auth.log there. Also, make sure you look at a bigger time span, not only the last 15 minutes.
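
If you'd rather check Elasticsearch directly, a query like the one below should return one auth.log event if any were indexed (a sketch, assuming the default filebeat-* indices and Elasticsearch listening on localhost:9200):

curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "size": 1,
  "query": { "term": { "source": "/var/log/auth.log" } }
}'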

Hi. Output of my system.yml file:

root@els01:~# cat /etc/filebeat/modules.d/system.yml 
- module: system
  # Syslog
  syslog:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

  # Authorization logs
  auth:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    var.paths: ["/var/log/auth.log"]

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
root@els01:~# 

This is the contents of my auth.log file, but searching in Elastic returns nothing.

root@els01:~# tail /var/log/auth.log
Mar 27 07:29:08 els01 sshd[9629]: Disconnected from 192.168.10.101 port 59706
Mar 27 07:29:08 els01 sshd[9629]: pam_unix(sshd:session): session closed for user root
Mar 27 07:29:08 els01 systemd-logind[1138]: Removed session 542.
Mar 27 07:29:11 els01 sshd[9715]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.10.101 user=root
Mar 27 07:29:12 els01 sshd[9715]: Failed password for root from 192.168.10.101 port 59756 ssh2
Mar 27 07:29:15 els01 sshd[9715]: Accepted password for root from 192.168.10.101 port 59756 ssh2
Mar 27 07:29:15 els01 sshd[9715]: pam_unix(sshd:session): session opened for user root by (uid=0)
Mar 27 07:29:15 els01 systemd-logind[1138]: New session 544 of user root.
Mar 27 07:35:01 els01 CRON[9761]: pam_unix(cron:session): session opened for user root by (uid=0)
Mar 27 07:35:01 els01 CRON[9761]: pam_unix(cron:session): session closed for user root
root@els01:~#

I found something!
Log gets uploaded in GMT?
Do I need to add my local timezone in the system.yml file?
If yes, what would be the best way?

Help?

Have you tried to enable timezone conversion by setting convert_timezone: true?

If convert_timezone is enabled, the add_locale processor is enabled. This adds the system's locale information to the events (check that your system's locale actually represents your current timezone). The second effect of setting this to true is that the date filter parsing your timestamp will use the event's locale information to adjust the timestamp.

Parsing is done in the Elasticsearch Ingest Node. After setting the flag to true, the ingest pipeline must be reconfigured. To do so, either delete the ingest pipeline from your Elasticsearch before restarting Filebeat, or run Filebeat with -update-pipelines to force an update of the ingest pipelines.
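
Deleting the pipeline is a single curl call. A sketch (the pipeline name below matches the 6.2.3 system/auth fileset; adjust it to your Filebeat version):

curl -XDELETE 'http://localhost:9200/_ingest/pipeline/filebeat-6.2.3-system-auth-pipeline?pretty'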

Ok thanks. I have updated my system.yml and enabled convert_timezone: true. I tried the command filebeat -update-pipelines, but that switch is not available. Do you have the right switch handy?

Thanks.

Steps I did to delete my ingest pipeline:

  1. Stopped Filebeat
  2. Ran: curl -XDELETE "http://localhost:9200/_ingest/pipeline/filebeat-6.2.3-system-syslog-pipeline?pretty"
  3. Deleted the Filebeat indices
  4. Edited /etc/filebeat/modules.d/system.yml (set convert_timezone to true)
  5. Restarted Filebeat

Logs still show the time 11 hrs ahead (I am in Australia and we are GMT+11).

I tried the above procedure again, but this time set convert_timezone to false; the logs are still 11 hrs ahead.

Looks to me like convert_timezone does nothing.

What else can we try?

Have you checked the machine's locale setting?

Yes, all my machines were originally built from one standard image.
Output from locale:

LANG=en_AU.UTF-8
LANGUAGE=en_AU:en
LC_CTYPE="en_AU.UTF-8"
LC_NUMERIC="en_AU.UTF-8"
LC_TIME="en_AU.UTF-8"
LC_COLLATE="en_AU.UTF-8"
LC_MONETARY="en_AU.UTF-8"
LC_MESSAGES="en_AU.UTF-8"
LC_PAPER="en_AU.UTF-8"
LC_NAME="en_AU.UTF-8"
LC_ADDRESS="en_AU.UTF-8"
LC_TELEPHONE="en_AU.UTF-8"
LC_MEASUREMENT="en_AU.UTF-8"
LC_IDENTIFICATION="en_AU.UTF-8"
LC_ALL=

The timezone is also correct, and so is the time across all the servers.

root@log01:~# timedatectl
      Local time: Wed 2018-04-04 19:48:55 AEST
  Universal time: Wed 2018-04-04 09:48:55 UTC
        RTC time: Wed 2018-04-04 09:48:55
       Time zone: Australia/Sydney (AEST, +1000)
 Network time on: yes
NTP synchronized: yes
 RTC in local TZ: no

Have you deleted the right pipeline? You've deleted system-syslog, but we've been looking at system-auth events :wink:

Can you get us another debug output with the timezone enabled in Beats? It makes me wonder whether Filebeat is correctly picking up the timezone setting.

It also makes me wonder which timezone Kibana and Elasticsearch are using. Keep in mind that Beats and Elasticsearch use UTC by default.

Checking the event, you have two timestamps: @timestamp and system.auth.timestamp. system.auth.timestamp is the value read directly from the log message. @timestamp is computed from system.auth.timestamp by the date processor.

When setting convert_timezone: true, the date processor executed is:

{
  "date": {
    "field": "system.auth.timestamp",
    "target_field": "@timestamp",
    "formats": [
      "MMM  d HH:mm:ss",
      "MMM dd HH:mm:ss"
    ],
    "ignore_failure": true,
    "timezone": "{{ beat.timezone }}"
  }
}

If var.convert_timezone: false, the date filter assumes the data to be in UTC:

{
  "date": {
    "field": "system.auth.timestamp",
    "target_field": "@timestamp",
    "formats": [
      "MMM  d HH:mm:ss",
      "MMM dd HH:mm:ss"
    ],
    "ignore_failure": true,
    "timezone": "UTC"
  }
}
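
If you want to see what the date processor does with and without a timezone, you can feed a single sample document through the ingest simulate API. A minimal sketch (the sample timestamp and the Australia/Sydney zone are just for illustration; swap in your own values):

curl -XPOST 'http://localhost:9200/_ingest/pipeline/_simulate?pretty' -H 'Content-Type: application/json' -d'
{
  "pipeline": {
    "processors": [
      {
        "date": {
          "field": "system.auth.timestamp",
          "target_field": "@timestamp",
          "formats": ["MMM  d HH:mm:ss", "MMM dd HH:mm:ss"],
          "timezone": "Australia/Sydney"
        }
      }
    ]
  },
  "docs": [
    { "_source": { "system": { "auth": { "timestamp": "Mar 24 18:06:36" } } } }
  ]
}'

The response shows the parsed @timestamp, so you can verify directly whether the offset gets applied.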

Digging into the date processor's code, I found that the date processor only sets the timezone on the parsed timestamp. The processor requires the time zone ID to be either in 'long format' or given as an offset (according to the Java doc).

Elasticsearch stores @timestamp in UTC. You should try to query some documents from the index and check that the @timestamp field actually matches your expectation.
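
To check that, something like the following returns just the two timestamp fields of one auth event (a sketch, assuming the default filebeat-* indices):

curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty&size=1&_source=@timestamp,system.auth.timestamp&q=fileset.name:auth'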

Kibana finally displays timestamps in the local browser's timezone. Which is kind of interesting, as you are in +1000 now, but your initial report was from when you were in +1100 (timezone change on 1 April, hahahah :smiley:).

So, given your original snapshot, this is the sequence of operations that might have occurred:

  1. read system.auth.timestamp from the message
  2. parse it into @timestamp, assuming UTC
  3. store the timestamp as is (in UTC)
  4. retrieve the logs via Kibana, which adds the local timezone -> @timestamp + 11h

Using the locale, this is the sequence we want to have:

  1. read system.auth.timestamp from the message
  2. parse it into @timestamp with the UTC+10 offset applied (based on the locale information)
  3. store the timestamp in UTC -> @timestamp - 10h
  4. retrieve the logs via Kibana, which adds the local timezone -> @timestamp + 10h

I wonder whether something went wrong during the setup, or whether Beats is reporting the wrong timezone. For debugging purposes it would be nice to have a look at one single event only and get the output of:

  • The unprocessed published event via the Beats debug output. Run Filebeat with -d 'publish' (or add the publish selector to the Filebeat config and set the log level to debug).
  • The event as it is stored in Elasticsearch. Best to query the event via curl.
  • The event output in Kibana Discover.

For testing/debugging, I'd use a log file with one event only. This way you don't have to search for the event.
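
A minimal sketch of such a test setup (the test file path and log line are made up; point the auth fileset's var.paths at the test file first):

echo 'Apr  5 13:01:01 els01 sshd[1234]: Accepted password for root from 192.168.10.8 port 2222 ssh2' > /tmp/auth-test.log

# in /etc/filebeat/modules.d/system.yml:
#   auth:
#     enabled: true
#     var.paths: ["/tmp/auth-test.log"]

filebeat -e -d 'publish' -c /etc/filebeat/filebeat.yml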

For starters, can you get us the output of curl -XGET 'http://localhost:9200/_ingest/pipeline/filebeat-6.2.3-system-auth-pipeline?pretty'?

Hmm, I'm getting the same issue here, sending syslog data with filebeat 6.2.3 to an Elastic Cloud instance...

The pipeline which gets created in the Elastic Cloud instance has no mention of a timezone.

I then get data like this in Elasticsearch (note, I'm in the UK, so data is "only" an hour out, due to DST):

@timestamp: April 5th 2018, 14:01:01.000
system.syslog.hostname: exampleserver
system.syslog.program: systemd
system.syslog.message: Starting Session 10967 of user root.
system.syslog.timestamp: Apr 5 13:01:01
offset: 1,126,786
beat.hostname: exampleserver.example.com
beat.name: exampleserver.example.com
beat.version: 6.2.3
prospector.type: log
source: /var/log/messages
fileset.module: system
fileset.name: syslog
_id: 6ieulWIBZZhbiTn_q-IK
_type: doc
_index: filebeat-6.2.3-2018.04.05
_score: -

(Edited to add...)
For completeness, this is the corresponding syslog line:

Apr 5 13:01:01 exampleserver systemd: Starting Session 10967 of user root.

Also, the system's timedatectl:

  Local time: Thu 2018-04-05 13:09:14 BST
  Universal time: Thu 2018-04-05 12:09:14 UTC
    RTC time: Thu 2018-04-05 12:09:14
   Time zone: Europe/London (BST, +0100)
 NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
  DST active: yes
 Last DST change: DST began at
              Sun 2018-03-25 00:59:59 GMT
              Sun 2018-03-25 02:00:00 BST
 Next DST change: DST ends (the clock jumps one hour backwards) at
              Sun 2018-10-28 01:59:59 BST
              Sun 2018-10-28 01:00:00 GMT

OK, so I think I solved this one myself... The template /etc/filebeat/modules.d/system.yml.disabled from a previous version of Filebeat (6.1.x) had:

# Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
convert_timezone: true

instead of

# Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
var.convert_timezone: true

I updated the config to the latter form, deleted the pipelines, and restarted Filebeat, and the times appear to be sorted now.
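
For reference, the sequence looked roughly like this (a sketch, assuming a systemd-managed Filebeat and the 6.2.3 pipeline names):

systemctl stop filebeat
curl -XDELETE 'http://localhost:9200/_ingest/pipeline/filebeat-6.2.3-system-*?pretty'
systemctl start filebeat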


It's working!
I had to stop Filebeat on all nodes, then delete the ingest pipeline, then today's index.
Adjusted system.yml with var.convert_timezone set to true (and uncommented).
Then started Filebeat on one node to be sure, sent some bogus SSH logins, and checked the filebeat index: it had the correct time.
Started Filebeat on the rest of the nodes and sent through some more bogus SSH connections; they all have the right time. Sent some syslogs (by restarting Metricbeat to generate logs) and checked the filebeat index again, all good!

Thanks for your help.

Ran this to delete my ingest pipeline:

curl -XDELETE 'localhost:9200/_ingest/pipeline/filebeat-6.2.3-system*?pretty'

Output of my /etc/filebeat/modules.d/system.yml:

root@els01:~# cat /etc/filebeat/modules.d/system.yml 
- module: system
  # Syslog
  syslog:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    var.convert_timezone: true

  # Authorization logs
  auth:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    #var.paths: ["/var/log/auth.log"]

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    var.convert_timezone: true
root@els01:~# 

