Hi all. I just wanted to confirm my thinking on what I'm trying to achieve.
We currently have a version 7.6.2 ES stack running on kubernetes in Azure AKS.
The ES audit logs are currently being sent to stdout (so available as pod logs).
I was thinking I could create a filebeat pod to collect those logs, but is that the wrong way to go about it? I was taking this route because we already have metricbeat set up in this fashion to collect system stats.
Am I right in thinking we should have the audit logs written to disk in the pods, and then install filebeat in each ES pod to hoover them up?
Yes, I've followed that guide now, and have the daemonset up in our k8s cluster.
Now just figuring out the logic to collect the correct logs.
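For reference, the config I'm starting from is basically the stock one from the guide, trimmed down to the relevant bit (so a sketch rather than my exact file):

```yaml
# filebeat.yml from the daemonset ConfigMap - the stock container input
filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
  processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
```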
Currently it looks to be collecting ALL kubernetes pod logs, as there seems to be no way to filter on selected pod names.
Am I correct in thinking I could use the add_kubernetes_metadata processor to filter on namespace, so my filebeat would only be looking at logs in, say, the elastic namespace?
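From what I can tell, add_kubernetes_metadata only enriches events with the kubernetes.* fields, so my thinking is to pair it with a drop_event processor to do the actual filtering. Something like this (untested sketch; "elastic" is just our namespace name):

```yaml
processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
      - logs_path:
          logs_path: "/var/log/containers/"
  # add_kubernetes_metadata only adds the kubernetes.* fields;
  # dropping everything outside the elastic namespace does the actual filtering
  - drop_event:
      when:
        not:
          equals:
            kubernetes.namespace: "elastic"
```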
So I've got filebeat up and running on kubernetes, but it seems to be hoovering up its own logs, and is therefore just looping round creating messy logs which eventually end up with loads of /////s.
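My plan is to add a drop_event condition so filebeat ignores its own container logs — an untested sketch, assuming the container in the daemonset is named filebeat:

```yaml
processors:
  # drop events from filebeat's own container to break the feedback loop
  - drop_event:
      when:
        equals:
          kubernetes.container.name: "filebeat"
```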