Hey guys, I'm having an issue with the System module for Filebeat 6.2.3 not actually shipping the syslog or messages log on either my CentOS or my Debian-based systems. I am getting data from the Icinga module on these same hosts, so my Elasticsearch server is receiving and storing information, but I have been researching a solution for this for days now. I know it's probably something very simple that I am missing because I am so new to this. Being so new, I also don't know what information is needed here, so please let me know if I've left anything out.
The issue is happening on CentOS 7 and Ubuntu 16.04. The System module is enabled, but it is not sending any data to Elasticsearch. My module config and Filebeat's startup log are below. Thanks again, guys.
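The system module config in modules.d/system.yml is essentially the stock file; a rough sketch (illustrative, with the log paths left at the distribution defaults rather than set explicitly):

- module: system
  # syslog fileset: defaults to /var/log/messages* on CentOS and /var/log/syslog* on Ubuntu
  syslog:
    enabled: true
    #var.paths:
  # auth fileset: defaults to /var/log/secure* on CentOS and /var/log/auth.log* on Ubuntu
  auth:
    enabled: true
    #var.paths:

And the startup log: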
2018-03-28T11:16:02.143-0400 INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-03-28T11:16:02.143-0400 INFO instance/beat.go:475 Beat UUID: 86e97091-703c-4735-b170-fc2c435279bb
2018-03-28T11:16:02.143-0400 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.3
2018-03-28T11:16:02.143-0400 INFO elasticsearch/client.go:145 Elasticsearch url: http://atlptgelk-dev1:9200
2018-03-28T11:16:02.144-0400 INFO pipeline/module.go:76 Beat name: atlptgnag-dev1.turner.com
2018-03-28T11:16:02.144-0400 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-03-28T11:16:02.144-0400 INFO instance/beat.go:301 filebeat start running.
2018-03-28T11:16:02.144-0400 INFO registrar/registrar.go:108 Loading registrar data from /var/lib/filebeat/registry
2018-03-28T11:16:02.145-0400 INFO registrar/registrar.go:119 States Loaded from registrar: 17
2018-03-28T11:16:02.145-0400 INFO crawler/crawler.go:48 Loading Prospectors: 1
2018-03-28T11:16:02.146-0400 INFO log/prospector.go:111 Configured paths: [/var/log/*.log]
2018-03-28T11:16:02.149-0400 INFO log/prospector.go:111 Configured paths: [/var/log/icinga2/debug.log*]
2018-03-28T11:16:02.150-0400 INFO log/prospector.go:111 Configured paths: [/var/log/icinga2/icinga2.log*]
2018-03-28T11:16:02.151-0400 INFO log/prospector.go:111 Configured paths: [/var/log/icinga2/startup.log]
2018-03-28T11:16:02.157-0400 INFO log/prospector.go:111 Configured paths: [/var/log/auth.log* /var/log/secure*]
2018-03-28T11:16:02.161-0400 INFO log/prospector.go:111 Configured paths: [/var/log/messages* /var/log/syslog*]
2018-03-28T11:16:02.161-0400 INFO crawler/crawler.go:82 Loading and starting Prospectors completed. Enabled prospectors: 1
2018-03-28T11:16:02.161-0400 INFO cfgfile/reload.go:127 Config reloader started
2018-03-28T11:16:02.165-0400 INFO log/prospector.go:111 Configured paths: [/var/log/icinga2/debug.log*]
2018-03-28T11:16:02.167-0400 INFO log/prospector.go:111 Configured paths: [/var/log/icinga2/icinga2.log*]
2018-03-28T11:16:02.167-0400 INFO log/prospector.go:111 Configured paths: [/var/log/icinga2/startup.log]
2018-03-28T11:16:02.173-0400 INFO log/prospector.go:111 Configured paths: [/var/log/auth.log* /var/log/secure*]
2018-03-28T11:16:02.183-0400 INFO log/prospector.go:111 Configured paths: [/var/log/messages* /var/log/syslog*]
2018-03-28T11:16:02.183-0400 INFO cfgfile/reload.go:258 Starting 2 runners ...
2018-03-28T11:16:02.183-0400 INFO elasticsearch/client.go:145 Elasticsearch url: http://atlptgelk-dev1:9200
2018-03-28T11:16:02.187-0400 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.3
2018-03-28T11:16:02.191-0400 INFO elasticsearch/client.go:145 Elasticsearch url: http://atlptgelk-dev1:9200
2018-03-28T11:16:02.191-0400 INFO log/harvester.go:216 Harvester started for file: /var/log/icinga2/debug.log
2018-03-28T11:16:02.191-0400 INFO log/harvester.go:216 Harvester started for file: /var/log/icinga2/icinga2.log
2018-03-28T11:16:02.194-0400 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.3
2018-03-28T11:16:02.198-0400 INFO cfgfile/reload.go:219 Loading of config files completed.
2018-03-28T11:16:03.195-0400 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.3
2018-03-28T11:16:03.197-0400 INFO template/load.go:73 Template already exists and will not be overwritten.
2018-03-28T11:16:22.200-0400 INFO log/harvester.go:216 Harvester started for file: /var/log/secure
2018-03-28T11:16:32.146-0400 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":38},"total":{"ticks":110,"time":120,"value":110},"user":{"ticks":80,"time":82}},"info":{"ephemeral_id":"c047f3b7-3f32-41da-aa2d-449cc7f66425","uptime":{"ms":30012}},"memstats":{"gc_next":4199840,"memory_alloc":2669280,"memory_total":13158672,"rss":18026496}},"filebeat":{"events":{"added":218,"done":218},"harvester":{"open_files":3,"running":3,"started":3}},"libbeat":{"config":{"module":{"running":2,"starts":2},"reloads":1},"output":{"events":{"acked":184,"batches":24,"total":184},"read":{"bytes":13216},"type":"elasticsearch","write":{"bytes":112489}},"pipeline":{"clients":11,"events":{"active":0,"filtered":34,"published":184,"retry":11,"total":218},"queue":{"acked":184}}},"registrar":{"states":{"current":17,"update":218},"writes":58},"system":{"cpu":{"cores":2},"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.025,"5":0.005}}}}}}
It seems like you've run Filebeat before, because it already has a registry file. Are you sure Filebeat hasn't already sent those logs? Filebeat does not reread events that have already been sent to the output; the registry records, per file, how far Filebeat has already read.
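For illustration, an entry in /var/lib/filebeat/registry looks roughly like this (made-up values); the offset is what prevents already-shipped lines from being sent again:

{"source": "/var/log/messages", "offset": 123456, "timestamp": "2018-03-28T11:16:02.145-04:00", "ttl": -1, "FileStateOS": {"inode": 67349, "device": 2049}}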
I have started the Filebeat service before, yes. I've checked all of my indexes and I don't see any information from the syslog, old or new. Here is a snippet of the system log from that host:
Mar 28 12:42:05 atlptgnag-dev1 systemd: Started Session 5602 of user root.
Mar 28 12:42:05 atlptgnag-dev1 systemd-logind: New session 5602 of user root.
Mar 28 12:42:05 atlptgnag-dev1 systemd: Starting Session 5602 of user root.
Mar 28 12:42:05 atlptgnag-dev1 dbus[560]: [system] Activating service name='org.freedesktop.problems' (using servicehelper)
Mar 28 12:42:05 atlptgnag-dev1 dbus-daemon: dbus[560]: [system] Activating service name='org.freedesktop.problems' (using servicehelper)
Mar 28 12:42:05 atlptgnag-dev1 dbus[560]: [system] Successfully activated service 'org.freedesktop.problems'
Mar 28 12:42:05 atlptgnag-dev1 dbus-daemon: dbus[560]: [system] Successfully activated service 'org.freedesktop.problems'
Mar 28 12:44:15 atlptgnag-dev1 systemd: Stopping filebeat...
Mar 28 12:44:15 atlptgnag-dev1 systemd: Stopped filebeat.
Mar 28 12:50:01 atlptgnag-dev1 systemd: Started Session 5603 of user root.
Mar 28 12:50:01 atlptgnag-dev1 systemd: Starting Session 5603 of user root.
Ah, I am =) The Icinga module is working correctly and pulling the Icinga logs. The System module is not pulling the system logs at all; that's the issue I'm running into =(
It turns out the System module is sending data like it should, and it is being indexed; it looks like a time and date issue (I never searched more than 4 hours back, and Eastern Time is UTC-5:00, or -4:00 during daylight saving). So the question now is: why are the timestamps being saved as two different times and dates in Elasticsearch? Is this a configuration issue somewhere else? In the snippet I posted, I noticed the @timestamp is 11:00 while the time in the syslog field is 15:00 with a -4 offset.
@timestamp is added by Filebeat; its value is the time the log line was read. system.syslog.timestamp contains the timestamp written in the log itself. The two can differ in several cases, e.g. when you send old logs to ES, or when the Filebeat and Elasticsearch hosts are in different time zones.
Interesting. It looks like this continued overnight with all of the current logs, not just the ones previously ingested by Filebeat; everything appears to be 5 hours in the past. As for the systems themselves, they are on the same VM farm, connected to the same NTP server, and all appear to be in sync.
Going to try this setting from the manual and will let you know what I find:
var.convert_timezone
If this option is enabled, Filebeat reads the local timezone and uses it at log parsing time to convert the timestamp to UTC. The local timezone is also added in each event in a dedicated field (beat.timezone). The conversion is only possible in Elasticsearch >= 6.1. If the Elasticsearch version is less than 6.1, the beat.timezone field is added, but the conversion to UTC is not made. The default is false.
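If I'm reading that right, it is set per fileset in modules.d/system.yml, something like this (sketch, assuming the stock module file):

- module: system
  syslog:
    enabled: true
    # read the host's local timezone and convert the syslog timestamps to UTC
    var.convert_timezone: true
  auth:
    enabled: true
    var.convert_timezone: true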