Adding fields when using autodiscovery

I have been trying to add custom fields to logs picked up by Filebeat running in Kubernetes as a DaemonSet. My logging provider requires these fields, and I'm having quite a bit of trouble adding them based on conditions. This is my base configuration:

    filebeat.config:
      prospectors:
        # Mounted `filebeat-prospectors` configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
        fields_under_root: true
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
          include_annotations: ['something/logging']
          templates:
            - condition.contains:
                kubernetes.labels.logzio_subaccount: 'dantest'
              config:
                log:
                  input:
                    type: docker
                    containers.ids:
                      - ${data.kubernetes.container.id}
      appenders:
        - type: config
          condition.equals:
            kubernetes.labels.logzio_subaccount: 'dantest'
          config:
            fields:
              logzio_codec: json
              token: xxx
              type: DANTEST

    processors:
    - add_cloud_metadata: ~
    - drop_event:
        when:
          not:
            equals:
              kubernetes.annotations.something/logging: 'true'
    output.logstash:
      hosts:
        - listener.logz.io:5015
      ssl:
        enabled: true
        certificate_authorities:
          - /etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt
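
For reference, the pod being matched carries roughly the following metadata. This is a trimmed sketch showing only the label and annotation that the template condition, the appender condition, and the drop_event processor refer to (the values match the event shown below):

    metadata:
      name: dantest
      labels:
        # matched by condition.contains / condition.equals above
        logzio_subaccount: dantest
      annotations:
        # events without this annotation are dropped by the drop_event processor
        something/logging: "true"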

When running Filebeat with debug logging enabled, I see the following event being published to the output:

    2018-11-16T20:55:23.292Z  DEBUG  [publish]  pipeline/processor.go:308  Publish event: {
      "@timestamp": "2018-11-16T20:55:21.762Z",
      "@metadata": {
        "beat": "filebeat",
        "type": "doc",
        "version": "6.4.3"
      },
      "source": "/var/lib/docker/containers/0401c37040e9ab46daa6f05372172c2ade413132f59ae04818e55ab86fa18b22/0401c37040e9ab46daa6f05372172c2ade413132f59ae04818e55ab86fa18b22-json.log",
      "stream": "stdout",
      "prospector": {
        "type": "docker"
      },
      "input": {
        "type": "docker"
      },
      "kubernetes": {
        "namespace": "default",
        "labels": {
          "logzio_subaccount": "dantest"
        },
        "annotations": {
          "something/logging": "true"
        },
        "pod": {
          "name": "dantest"
        },
        "node": {
          "name": "ip-172-16-42-235.us-west-2.compute.internal"
        },
        "container": {
          "name": "dantest"
        }
      },
      "host": {
        "name": "filebeat-c2bh6"
      },
      "beat": {
        "name": "filebeat-c2bh6",
        "hostname": "filebeat-c2bh6",
        "version": "6.4.3"
      },
      "meta": {
        "cloud": {
          "availability_zone": "us-west-2c",
          "provider": "ec2",
          "instance_id": "i-xxx",
          "machine_type": "m4.2xlarge",
          "region": "us-west-2"
        }
      },
      "message": "{\"status\": \"boomshakalaka\"}",
      "offset": 2013156
    }

Should I expect to see the custom fields from my configuration (logzio_codec, etc.) in this output? I have tried a variety of things, like moving the fields: block to different places in the config, but I still have not had any luck getting them to show up in the debug output. Hopefully I'm just doing something dumb and the answer is obvious, but based on the documentation I'm unclear on what else I can do to debug this further.
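
In case it matters, debug logging is turned on roughly like this (either the filebeat.yml options or the equivalent -d flag on the command line):

    # filebeat.yml
    logging.level: debug
    logging.selectors: ["publish"]
    # or equivalently: filebeat -e -d "publish"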

In case someone runs into this, here is the solution:

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
          include_annotations: ['something/logging']
          templates:
            - condition.contains:
                kubernetes.labels.logzio_subaccount: dantest
              config:
                - type: docker
                  containers.ids:
                    - ${data.kubernetes.container.id}
                  fields:
                    logzio_codec: json
                    token: xxx
                    type: DANTEST
                  fields_under_root: true
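
With fields_under_root: true, the custom fields are added at the top level of each event rather than nested under a fields key, so the published events now carry them alongside the existing keys, roughly like this excerpt (token value elided as in the config above):

    # excerpt of a published event, with the custom fields at the event root
    message: '{"status": "boomshakalaka"}'
    logzio_codec: json
    token: xxx
    type: DANTEST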
