Ship application logs using Filebeat

Hi,
I have a Kubernetes cluster on which I have deployed the Elastic Stack using ECK. I have several microservices deployed in the cluster as pods, and I want to fetch logs from particular microservice pods.
I have also created a specific JSON log file for each microservice, and I want to ship that particular file using Filebeat. With my current configuration Filebeat is sending logs of every container.
One more thing: all the logs of the microservice pods are being saved on a persistent volume. Is there a way to mount that volume into Filebeat and read those particular logs?

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat-dev
  namespace: dev
spec:
  type: filebeat
  version: 8.2.2
  kibanaRef:
    name: kibana-dev    
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
    output.kafka:
      codec.json:
        pretty: true
        escape_html: false
      hosts: ["kafka-svc:9092"]
      topic: 'filebeat_topic'
      partition.round_robin:
        reachable_only: false
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000
  daemonSet:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers

Hi @shivendra95,

Yes, you can mount the volume into Filebeat's pod and read the logs directly from there. On a quick glance over your manifest, it seems you've done it already:

        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers

Are those the volumes holding the logs you want?

The configuration for a Filebeat running on Kubernetes is essentially no different from a Filebeat running directly on a Linux box: Filebeat reads log files from a directory. The only difference is that when Filebeat runs in a container (as on Kubernetes), you need to mount the logs from the host into the container.

In your case, it seems, you just need to be more specific about which files you want to read, because the configuration you posted is reading all files matching:

- /var/log/containers/*.log

Take a look at the documentation of the container input (Container input | Filebeat Reference [8.2] | Elastic) for a more detailed overview of the possible options.
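For instance, because container log file names on Kubernetes follow the pattern `<pod-name>_<namespace>_<container-name>-<container-id>.log`, you can narrow the glob in the `paths` option. A sketch (the pod-name prefix and namespace here are placeholders — adjust them to your own):

```yaml
filebeat.inputs:
- type: container
  paths:
  # Only read logs from pods whose name starts with "my-microservice-"
  # in the "dev" namespace, instead of every container on the node.
  - /var/log/containers/my-microservice-*_dev_*.log
```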

So I want to fetch logs specifically from two pods, and the mount path is different for each of them.
For one of them it's /logistic and for the other it's /rental.

Where do I need to change the mount path?

It's the paths field of the container input configuration; our documentation is quite detailed and contains some examples of how to do it.

Read it carefully and you will find the bits you need to update in your Filebeat configuration/Kubernetes manifest file.
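As a sketch of what that could look like for logs written to a persistent volume: mount the volume into the Filebeat pod and point an input at the mounted directories. All names below (the claim name `app-logs`, the mount path `/mnt/applogs`, the file globs) are placeholders, not something from your cluster:

```yaml
spec:
  config:
    filebeat.inputs:
    - type: filestream        # reads plain/JSON log files, unlike the container input
      paths:
      - /mnt/applogs/logistic/*.json
      - /mnt/applogs/rental/*.json
  daemonSet:
    podTemplate:
      spec:
        containers:
        - name: filebeat
          volumeMounts:
          - name: applogs
            mountPath: /mnt/applogs
            readOnly: true
        volumes:
        - name: applogs
          persistentVolumeClaim:
            claimName: app-logs   # hypothetical PVC holding the microservice logs
```

Note that whether a DaemonSet can mount the claim on every node depends on the volume's access mode (e.g. ReadWriteMany), so check what your persistent volume supports.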

Hi @TiagoQueiroz ,
In my case Filebeat is sending logs of all containers; I want to restrict that and ship logs of only specific containers.
Is this possible through Filebeat?

There are multiple ways of doing that:

  1. You can specify specific files/file paths. If you look at /var/log/containers/, you'll notice that the deployment is part of the file name, e.g: coredns-64897985d-hcqbt_kube-system_coredns-0ae69813124edbc953d6b5db8a91585c01f80e538a3e0723ea55697c9988f5eb.log
  2. You can use some processors to filter out some events/files. Things like pod name, namespace, etc. should be part of the event and allow for dropping it.
  3. You can use the exclude_lines option to drop lines matching a pattern.
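As an illustration of option 2, a drop_event processor can discard every event except those from named containers. A sketch — the container names are placeholders, and it assumes events carry kubernetes.* fields (added by the add_kubernetes_metadata processor or autodiscover):

```yaml
processors:
- drop_event:
    when:
      not:
        or:
        # Keep only events from these two containers; drop everything else.
        - equals:
            kubernetes.container.name: "logistic"
        - equals:
            kubernetes.container.name: "rental"
```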
