Filebeat with customized config and ingest_pipeline

I've deployed an ECK cluster (Kibana, Elasticsearch, Filebeat) and want to port my old Logstash filters (logstash.conf) to an ingest pipeline in Elasticsearch. Fortunately, I was able to do this with various processors.
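
For context, I created the pipeline along these lines (the grok processor here is a hypothetical stand-in for my actual filters):

    PUT _ingest/pipeline/test-pipeline
    {
      "description": "Ported from logstash.conf",
      "processors": [
        {
          "grok": {
            "field": "message",
            "patterns": ["%{COMBINEDAPACHELOG}"]
          }
        }
      ]
    }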

When I tested the ingest pipeline against an actual index, it worked smoothly as expected and generated new fields from the "message" field.
I also didn't face any syntax issues when I configured Filebeat to use "ingest_pipeline: test-pipeline".
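
For reference, I verified the pipeline with the simulate API, roughly like this (the sample log line here is made up):

    POST _ingest/pipeline/test-pipeline/_simulate
    {
      "docs": [
        {
          "_source": {
            "message": "127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] \"GET / HTTP/1.1\" 200 2326"
          }
        }
      ]
    }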

But when I searched for a specific field that should have been created from the "message" field by the ingest pipeline, it wasn't there.

I don't know how to debug and solve this issue. Here is my filebeat.yml:

    filebeat.idle_timeout: 10s
    filebeat.spool_size: 1024
    logging.level: info
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/apache2/*
        fields_under_root: true
        fields:
          doctype: apache_access
(... other log input configs omitted ...)

    processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"

    output.elasticsearch:
      host: ${NODE_NAME}
      hosts:
      - elasticsearch-test-es-default:9200
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: https
      ingest_pipeline: test-pipeline
      ssl.verification_mode: none
      username: elastic

It should be just `pipeline`, see Configure the Elasticsearch output | Filebeat Reference [8.5] | Elastic.
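
Applied to the config above, the output section would look something like this (same hosts and credentials, only the option name changes; I also dropped the host: ${NODE_NAME} line, which doesn't appear to be a valid output.elasticsearch option):

    output.elasticsearch:
      hosts:
      - elasticsearch-test-es-default:9200
      username: elastic
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: https
      ssl.verification_mode: none
      pipeline: test-pipeline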

@legoguy1000 Thanks for the quick reply.
I had already tried using only `pipeline` before, but it throws a stream error:
(screenshot of the stream error)

Also, while monitoring the Filebeat logs, I found one more thing:

    object mapping for [agent] tried to parse field [agent] as object, but found a concrete value
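
If it helps to narrow this down, the current mapping of the agent field can be inspected with something like this (assuming the default filebeat-* indices):

    GET filebeat-*/_mapping/field/agent

A conflict like this typically means the documents arrive with agent as a plain string (for example, a grok pattern capturing the user agent into a field named agent) while the Filebeat index template maps agent as an object.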

What error?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.