We would like to use Filebeat to collect audit logs across several systems. Our Elastic cluster sits behind a security VLAN, and we don't want to open ports to the public.
The most practical option seems to be to put a Logstash forwarder on our public syslog box, since it already has a public-facing IP and is easier to monitor.
To get the ingest pipelines and index templates created, I first ran Filebeat with the auditd module from inside our network. Now I want to forward through Logstash instead, but I can't get the pipeline to parse the messages that arrive via Logstash.
My Logstash config looks like this:
input {
  beats {
    port => 5044
    host => "0.0.0.0"
    add_field => { "log_source" => "syslog" }
    type => "log"
  }
}

output {
  elasticsearch {
    hosts => [ "10.0.0.1" ]
    index => "filebeat-%{+YYYY.MM.dd}"
    pipeline => "filebeat-5.4.0-auditd-log-pipeline"
  }
}
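For comparison, the example output in the Logstash documentation for receiving from Beats derives the index and document type from the event metadata rather than hard-coding them, roughly like this (I've substituted our host; I'm not sure whether my hard-coded index above is part of the problem):

```
output {
  elasticsearch {
    hosts => [ "10.0.0.1" ]
    # let the Filebeat-installed template stand, rather than Logstash's own
    manage_template => false
    # index/type taken from the metadata Filebeat ships with each event
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```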
But the indexed documents show the following errors (visible in Kibana):
error: Provided Grok expressions do not match field value: [May 24 18:59:52 ES01 sudo: ubuntu : TTY=pts/1 ; PWD=/var/log/elasticsearch ; USER=root ; COMMAND=/usr/sbin/service filebeat restart]
message: May 24 18:59:52 ES01 sudo: ubuntu : TTY=pts/1 ; PWD=/var/log/elasticsearch ; USER=root ; COMMAND=/usr/sbin/service filebeat restart
If I send the same logs directly from Filebeat to Elasticsearch, they are parsed correctly.
What am I doing wrong when forwarding through Logstash?
I guess the other option is to write everything to a syslog file on that box and read it back with a local Filebeat, since we're writing it to file there anyway. But I'd need some help with that Filebeat config.
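Something like the following (Filebeat 5.x syntax) is what I have in mind for that fallback; the log path is a guess on my part, and the pipeline name is the one created earlier:

```
filebeat.prospectors:
  - input_type: log
    paths:
      # guess: wherever syslog is writing the forwarded audit entries
      - /var/log/audit/audit.log

output.elasticsearch:
  hosts: ["10.0.0.1:9200"]
  index: "filebeat-%{+yyyy.MM.dd}"
  # point events at the ingest pipeline Filebeat's auditd module installed
  pipeline: "filebeat-5.4.0-auditd-log-pipeline"
```

Would that be the right shape, or is there a cleaner way to keep the module parsing while going through the syslog box?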