How to add pod annotations to Filebeat documents?

Hi folks :wave:

I'm running Filebeat within my cluster, but I can't figure out how to add fields to my log documents that would include the pod's annotations. That's super relevant for search later.

Here's my current configuration:

    filebeat.autodiscover:
      providers:
      - type: kubernetes
        in_cluster: true
        hints.enabled: false
        labels.dedot: false
        annotations.dedot: false
        resource: pod
        include_annotations:
          - "example_annotation"
        templates:
          - config:
              - module: kubernetes
                period: 10s
                add_metadata: true
                metricsets:
                  - state_node
                  - state_deployment
                  - state_daemonset
                  - state_replicaset
                  - state_pod
                  - state_container
                  - state_job
                  - state_cronjob
                  - state_resourcequota
                  - state_statefulset
                  - state_service

    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*preview-builds*.log
      processors:
        - add_kubernetes_metadata:
            host: "${NODE_NAME}"
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    output.elasticsearch:
      hosts: ["http://${ELASTICSEARCH_NAME}.${ELASTICSEARCH_NAMESPACE}.svc.cluster.local:9200"]
      username: "${ELASTICSEARCH_USERNAME}"
      password: "${ELASTICSEARCH_PASSWORD}"

With that, I do get log events, and they're annotated with some pod metadata, but nothing beyond what I'd get even without the whole providers section.

Can someone point out the correct way to include my annotations in these events?

As a note, I also double-checked that my pods do include the annotations I reference in the config.

For reference, I'm on 8.6.2.

Hi @lucasdacosta Welcome to the community!

I believe I did this in the past, but I will need to look.

Perhaps take a look at this.

Specifically, add_resource_metadata and include_annotations.

I don't think you need to add the matcher logic.

If I get a chance, I will see if I can get mine working again... it's definitely an advanced topic.

Put those under add_kubernetes_metadata, under the provider.
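
Something like this on the provider (a rough sketch, untested here; add_resource_metadata and its node/namespace include_annotations sub-options are the documented settings, and example_annotation is just the name from your config):

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          add_resource_metadata:
            node:
              include_annotations:
                - "example_annotation"
            namespace:
              include_annotations:
                - "example_annotation"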

EDIT: hmmm, looking at pod annotations...

OK, I spent some time and I do not see an easy way to get pod annotations to work, BUT all the values ARE available... so here is my workaround / hack :slight_smile: but it works.

Here is a sample deployment manifest:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: product-catalog
      namespace: product-catalog
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: productcatalogservice
      namespace: product-catalog
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: productcatalogservice
      template:
        metadata:
          labels:
            app: productcatalogservice
          annotations:
            co.elastic.monitor/type: "tcp"
            co.elastic.monitor/schedule: "@every 10s"
            co.elastic.monitor/timeout: "5s"
            co.elastic.monitor/name: "product-catalog-pod"
            co.elastic.monitor/port: "3550"
            co.elastic.monitor/proc.0: "add_fields"
            co.elastic.monitor/proc.1: "add_geo"
        spec:
          serviceAccountName: product-catalog
          terminationGracePeriodSeconds: 5
          containers:
          - name: server
            image: gcr.io/google-samples/microservices-demo/productcatalogservice:v0.3.6
            ports:
            - containerPort: 3550
            env:
            - name: PORT
              value: "3550"
            - name: DISABLE_STATS
              value: "1"
            - name: DISABLE_TRACING
              value: "1"
            - name: DISABLE_PROFILER
              value: "1"
            # - name: JAEGER_SERVICE_ADDR
            #   value: "jaeger-collector:14268"
            readinessProbe:
              exec:
                command: ["/bin/grpc_health_probe", "-addr=:3550"]
            livenessProbe:
              exec:
                command: ["/bin/grpc_health_probe", "-addr=:3550"]
            resources:
              requests:
                cpu: 100m
                memory: 64Mi
              limits:
                cpu: 200m
                memory: 128Mi

Then this is what I did. Even if include_annotations worked for pods (it looks like it only works for node or namespace), it does not support wildcards, so you would have to name each annotation you want anyway... so instead I just added them as fields.

Here is my example:

    data:
      filebeat.yml: |-
        # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
        filebeat.autodiscover:
          providers:
            - type: kubernetes
              node: ${NODE_NAME}
              hints.enabled: true
              hints.default_config:
                type: container
                paths:
                  - /var/log/containers/*${data.kubernetes.container.id}.log
                # Set fields
                fields_under_root: true
                fields:
                  # You can see the whole kubernetes object...
                  fields.test: "test"
                  fields.annotations.co.elastic.monitor/type: "${data.kubernetes.annotations.co.elastic.monitor/type}"
                  fields.annotations.co.elastic.monitor/schedule: "${data.kubernetes.annotations.co.elastic.monitor/schedule}"
                  fields.annotations.co.elastic.monitor/timeout: "${data.kubernetes.annotations.co.elastic.monitor/timeout}"

And now my ingested document has:

    "fields": {
      "test": "test",
      "annotations": {
        "co": {
          "elastic": {
            "monitor/timeout": "5s",
            "monitor/type": "tcp",
            "monitor/schedule": "@every 10s"


Thanks, Stephen! Just tested it, and it worked.

I was also able to remove the inputs part, as you mentioned.
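
In case it helps anyone else, here's roughly what my config ended up looking like (a sketch pieced together from the examples above, with my example_annotation swapped in for Stephen's monitor annotations):

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
            fields_under_root: true
            fields:
              fields.annotations.example_annotation: "${data.kubernetes.annotations.example_annotation}"

    output.elasticsearch:
      hosts: ["http://${ELASTICSEARCH_NAME}.${ELASTICSEARCH_NAMESPACE}.svc.cluster.local:9200"]
      username: "${ELASTICSEARCH_USERNAME}"
      password: "${ELASTICSEARCH_PASSWORD}"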

Thanks a lot for the quick response, really appreciate it.
