Use k8s provider fields in filebeat config

Hello everyone!

We are deploying the Elastic Agent as a DaemonSet to slurp up our container logs using hints-based autodiscovery.

This works and we can selectively parse pods based on the following hint:

    podTemplate:
      metadata:
        annotations:
          co.elastic.hints/package: "container_logs_ecs"

The standalone agent.yml provider section:

    ...
    providers:
      kubernetes:
        node: ${NODE_NAME}
        scope: node
        include_annotations: true
        include_labels: true
        hints:
          default_container_logs: false
          enabled: true
    ...

We mount the following file into the Agent pod through this ConfigMap, and it works fine:

    # container_logs_ecs.yml
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: external-inputs
      namespace: kube-system
      labels:
        k8s-app: elastic-agent
    data:
      container_logs_ecs.yml: |-
        inputs:
          - name: hints-filestream-container-logs
            id: hints-filestream-container-logs-${kubernetes.hints.container_id}
            type: filestream
            use_output: default
            streams:
              - condition: ${kubernetes.hints.container_logs_ecs.enabled} == true
                data_stream:
                  dataset: cps-test
                  type: logs
                parsers:
                  - container:
                      format: auto
                      stream: ${kubernetes.hints.container_logs.stream|'all'}
                  - ndjson:
                      target: ""
                      ignore_decoding_error: false
                      expand_keys: true
                      add_error_key: true
                paths:
                  - /var/log/containers/*${kubernetes.hints.container_id}.log
                prospector:
                  scanner:
                    symlinks: true
            data_stream.namespace: default

What we want to achieve is to send each pod's logs to a different data stream based on its Kubernetes fields, by changing the dataset field:

    dataset: cps-${data.kubernetes.namespace}

The full ConfigMap then becomes:

    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: external-inputs
      namespace: kube-system
      labels:
        k8s-app: elastic-agent
    data:
      container_logs_ecs.yml: |-
        inputs:
          - name: hints-filestream-container-logs
            id: hints-filestream-container-logs-${kubernetes.hints.container_id}
            type: filestream
            use_output: default
            streams:
              - condition: ${kubernetes.hints.container_logs_ecs.enabled} == true
                data_stream:
                  dataset: cps-${data.kubernetes.namespace}
                  type: logs
                parsers:
                  - container:
                      format: auto
                      stream: ${kubernetes.hints.container_logs.stream|'all'}
                  - ndjson:
                      target: ""
                      ignore_decoding_error: false
                      expand_keys: true
                      add_error_key: true
                paths:
                  - /var/log/containers/*${kubernetes.hints.container_id}.log
                prospector:
                  scanner:
                    symlinks: true
            data_stream.namespace: default

With this change, however, the dynamically generated config no longer includes an input for this pod.

How can I access fields returned by the kubernetes provider?
Is there a way to see all the fields available under kubernetes.hints.* or data.*?
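
As a debugging aid (a suggestion, not something from the docs above, and assuming a reasonably recent Agent version): the `elastic-agent inspect` subcommand prints the configuration the Agent has computed from the policy and providers, which can help verify what the kubernetes provider actually emits. From inside the running Agent pod, with a placeholder pod name:

    # Print the computed configuration of the running Agent;
    # newer versions also have flags on "inspect" to resolve provider variables.
    kubectl exec -n kube-system elastic-agent-xxxxx -- elastic-agent inspect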

Thanks!

I eventually switched to conditions-based autodiscovery.
All the kubernetes.* fields are available there.
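
For comparison, a minimal sketch of the conditions-based equivalent (the `app` label and its `cps` value are illustrative placeholders; the `kubernetes.*` variables come straight from the kubernetes provider):

    inputs:
      - name: filestream-container-logs
        id: filestream-container-logs-${kubernetes.container.id}
        type: filestream
        use_output: default
        data_stream.namespace: default
        streams:
          # With conditions-based autodiscovery the provider fields resolve
          # directly, so the namespace can be used in the dataset name.
          - condition: ${kubernetes.labels.app} == 'cps'
            data_stream:
              dataset: cps-${kubernetes.namespace}
              type: logs
            paths:
              - /var/log/containers/*${kubernetes.container.id}.log
            prospector.scanner.symlinks: true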

Is this maybe a bug in hints-based autodiscovery?
