How to use pipelines/processors with autodiscover


(Denis Baryshev) #1

Hello, I have the following config file.

I'm pretty lost: I don't know where to direct the logs of containers labeled log-format: json to my custom "json-message" pipeline, or where to use decode_json_fields.

Please help

    filebeat.config:
      inputs:
        path: ${path.config}/prospectors.d/*.yml
        reload.enabled: false

      modules:
        path: /modules.d/*.yml
        reload.enabled: false

    filebeat.autodiscover:
      providers:
      - hints.enabled: false
        templates:
        - condition:
            equals:
              kubernetes.labels.app: mongodb
          config:
          - log:
              input:
                containers.ids:
                - ${data.kubernetes.container.id}
                exclude_lines:
                - ^\s+[\-`('.|_]
                type: docker
            module: mongodb
        - condition:
            equals:
              kubernetes.labels.app: redis
          config:
          - log:
              input:
                containers.ids:
                - ${data.kubernetes.container.id}
                exclude_lines:
                - ^\s+[\-`('.|_]
                type: docker
            module: redis
            slowlog:
              enabled: false
        - condition:
            equals:
              kubernetes.labels.log-format: json
          config:
          - containers.ids:
            - ${data.kubernetes.container.id}
            exclude_lines:
            - ^\s+[\-`('.|_]
            type: docker
        type: kubernetes

    filebeat.inputs: []

    http.enabled: false
    http.port: 5066

    output.elasticsearch:
      hosts:
      - logs-elasticsearch-client:9200

    output.file:
      filename: filebeat
      number_of_files: 5
      path: /usr/share/filebeat/data
      rotate_every_kb: 10000

    output.file.enabled: false

    processors:
    - add_cloud_metadata: null

    setup.kibana:
      host: http://logs-kibana:5601

    setup.template:
      enabled: true
      overwrite: false
      settings:
        index.number_of_replicas: 1
        index.number_of_shards: 1

(Steffen Siering) #2

Is there any specific input/module not working correctly for you?

You have mostly configured modules; the last template is the only plain input. There you can add additional processor configurations, or use the pipeline setting to forward processing for this input to an Elasticsearch Ingest Node pipeline. For example:

    - condition:
        equals:
          kubernetes.labels.log-format: json
      config:
      - type: docker
        containers.ids:
          - ${data.kubernetes.container.id}
        exclude_lines:
          - ^\s+[\-`('.|_]
        processors:
          - decode_json_fields:
              fields: ["message"]

I just added the decode_json_fields processor to your docker input.
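Alternatively, if you want the parsing to happen in Elasticsearch rather than in Filebeat, the same template can point the input at your custom "json-message" Ingest Node pipeline via the pipeline setting. A sketch, assuming that pipeline already exists in Elasticsearch:

    - condition:
        equals:
          kubernetes.labels.log-format: json
      config:
      - type: docker
        containers.ids:
          - ${data.kubernetes.container.id}
        exclude_lines:
          - ^\s+[\-`('.|_]
        pipeline: json-message

With this, Filebeat ships the raw events and the "json-message" pipeline does the JSON decoding on the Elasticsearch side.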

Tip: put type or module at the beginning of a configuration block. This makes it easier to see what will actually be configured.
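If you do go the Ingest Node route, a minimal "json-message" pipeline could be created like this (a sketch, assuming the JSON document arrives in the message field):

    PUT _ingest/pipeline/json-message
    {
      "processors": [
        {
          "json": {
            "field": "message",
            "add_to_root": true
          }
        }
      ]
    }

Here add_to_root merges the decoded keys into the top level of the event instead of nesting them under a target field.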