Hi,
We are currently using Filebeat + Logstash to ship our log files to Elasticsearch, and that works fine. We would now like to send the files directly from Filebeat instead, but when we try, we get large numbers of errors like the one below. The problem seems to be related to the auditd module, since the failing events all have event.module set to auditd.
May 20 12:42:20 <hostname> filebeat: 2020-05-20T12:42:20.037+0200#011WARN#011[elasticsearch]#011elasticsearch/client.go:384#011Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbfa9603eecb270c6, ext:116471983, loc:(*time.Location)(0x594e5e0)}, Meta:{"pipeline":"filebeat-7.7.0-auditd-log-pipeline"}, Fields:{"agent":{"ephemeral_id":"e28f8910-c7e0-4982-b976-c33ac837060c","hostname":"<hostname>","id":"daf30849-9a42-4c10-a668-0c3dbd13d451","type":"filebeat","version":"7.7.0"},"ecs":{"version":"1.5.0"},"event":{"dataset":"auditd.log","module":"auditd"},"fileset":{"name":"log"},"host":{"name":"<hostname>"},"input":{"type":"log"},"log":{"file":{"path":"/var/log/audit/audit.log.1"},"offset":1555232},"message":"<message>","service":{"type":"auditd"}}, Private:file.State{Id:"", Finished:false, Fileinfo:(*os.fileStat)(0xc000c60270), Source:"/var/log/audit/audit.log.1", Offset:1555342, Timestamp:time.Time{wall:0xbfa9603eeb75077d, ext:95670131, loc:(*time.Location)(0x594e5e0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x47, Device:0xfc04}}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=403): {"type":"security_exception","reason":"action [indices:admin/create] is unauthorized for user [<username>]"}
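For what it's worth, the failing action (indices:admin/create) is index auto-creation, i.e. the event is trying to write to an index that does not exist yet. The same check can be triggered outside Filebeat by creating a throwaway index as the same user; this is only a sketch, and the index name below is just a placeholder:

curl -u <username>:<password> -X PUT "http://<ip>:9200/throwaway-test-index"

A 403 security_exception there would mean the user is not allowed to create arbitrary new indices, which looks like what Filebeat is running into.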
Our Filebeat configuration file looks like this:
filebeat.inputs:
- type: log
  enabled: true
  fields:
    index: foo
  paths:
    - /path1
    - /path2
- type: log
  enabled: true
  fields:
    index: bar
  paths:
    - /path1
    - /path2

filebeat.modules:
- module: auditd
  log.enabled: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template:
  name: "foo"
  pattern: "foo-*"
  enabled: true
  overwrite: false

setup.ilm:
  enabled: false

output.elasticsearch:
  hosts: ["<ip>"]
  protocol: "http"
  username: "<username>"
  password: "<password>"
  index: "%{[fields.index]}-%{+yyyy.MM.dd}"
The Elasticsearch user has access to the two indices, foo and bar, and both have been initialized with a template. Other log files are being indexed in Elasticsearch without problems, and some entries from the audit log do show up as well, so at least part of the audit log is processed correctly.
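In case it is relevant, this is roughly how we plan to double-check the user's index privileges, using the _has_privileges API (a sketch; the placeholders match the config above, and filebeat-* is included only because the failing events appear to use the default Filebeat index naming, which we are not sure about):

curl -u <username>:<password> -X POST "http://<ip>:9200/_security/user/_has_privileges" -H 'Content-Type: application/json' -d'
{
  "index": [
    {
      "names": [ "foo-*", "bar-*", "filebeat-*" ],
      "privileges": [ "create_index", "create_doc", "index" ]
    }
  ]
}'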