ES K8s (AKS) audit logs via Filebeat

Hi all. I just wanted to confirm my thinking on what I'm trying to achieve.

We currently have a version 7.6.2 ES stack running on Kubernetes in Azure AKS.

The ES audit logs are currently being sent to stdout (so available as pod logs).
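For reference, audit logging itself is switched on per node in elasticsearch.yml along these lines (the stdout routing comes from the image's log4j2 setup rather than this file, and the exclude list here is just illustrative):

xpack.security.audit.enabled: true
# optional, illustrative: trim noisy event types
xpack.security.audit.logfile.events.exclude: ["access_granted"]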

I was thinking I could create a Filebeat pod to collect those logs, but that seems like the wrong way to go about it? I was taking this route because we already have Metricbeat set up in this fashion to collect system stats.

Am I right in thinking we should have the audit logs written to disk in the pods, and then install Filebeat in each ES pod to hoover them up?

Thanks in advance.

Hi @VishalBhalla!

I would say that you can do this, yes! Actually, what you need is to deploy Filebeat as a DaemonSet on your k8s nodes and have it collect the logs of the containers you want. See https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html
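In case it helps, the manifest from that guide boils down to roughly this (a trimmed sketch only; the full example also wires in a ServiceAccount and a ConfigMap for filebeat.yml):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.6.2
          volumeMounts:
            # host path where the container runtime writes pod logs
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers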

Regards!

Yes, I've followed that guide now, and have the DaemonSet up in our k8s cluster.
Now just figuring out the logic to collect the correct logs.

Currently it looks to be collecting ALL Kubernetes pod logs, as there seems to be no way to filter on selected pod names.

Am I correct in thinking I could use the add_kubernetes_metadata processor to filter on namespace, so my Filebeat would only be looking at logs in, say, the elastic namespace?
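Something like this is what I have in mind (untested sketch; as I understand it, namespace on the processor only limits where metadata is looked up, so I'm assuming I'd still need a condition to actually drop the other events):

processors:
  - add_kubernetes_metadata:
      namespace: "elastic"
  - drop_event:
      when:
        not:
          equals:
            kubernetes.namespace: "elastic"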

So I've got Filebeat up and running on Kubernetes, but it seems to be hoovering up its own logs, and therefore just loops round creating messy logs which eventually end up with loads of /////s.

My Filebeat YAML so far:

filebeat.modules:
  - module: elasticsearch

filebeat.inputs:
  - type: container
    paths:
      - '/var/lib/docker/containers/*/*.log'
    processors:
      - add_kubernetes_metadata:
          namespace: "elastic"

output.console:
  #pretty: true

I just want to hoover up the audit logs that the Elasticsearch, Logstash and Kibana pods create.

Help please? Thanks :slight_smile:

Right. Got this so far, and it kinda does what I need... I think:

filebeat.modules:
  - module: elasticsearch

filebeat.inputs:
  - type: container
    paths:
      - '/var/lib/docker/containers/*/*.log'
    processors:
      - add_kubernetes_metadata:
          namespace: "elastic"

processors:
  # copy the Beat type (e.g. "filebeat") up to a top-level `type` field
  - copy_fields:
      fields:
        - from: agent.type
          to: type
      fail_on_error: true
      ignore_missing: false

  # the audit log lines are JSON, so decode `message` in place
  - decode_json_fields:
      fields: [ "message" ]

  # keep only audit events from the elastic namespace, and skip our own logs
  - drop_event.when.or:
      - contains.kubernetes.pod.name: "filebeat"
      - not.equals.kubernetes.namespace: "elastic"
      - not.equals.message.type: "audit"

output.console:
  #pretty: true

If I'm doing anything silly, or if this can be improved in any way, I'd appreciate the feedback, cheers.

Hi!

What you have is looking good. Also have a look at autodiscover (https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html), which I think can fit your case and help you collect logs only from the services that you actually want.
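A condition-based sketch along these lines might be a starting point (the namespace and log path are assumptions based on your config above):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.namespace: "elastic"
          config:
            - type: container
              paths:
                # per-container log path resolved by the provider
                - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log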

Regards.
