I've deployed an ECK cluster (Elasticsearch, Kibana, Filebeat) and want to port my old Logstash filters (logstash.conf) to an ingest pipeline in Elasticsearch. Fortunately, I was able to do this with the various processors.
When I tested the ingest pipeline against an actual index, it worked smoothly as expected: it generated the new fields from the "message" field.
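For reference, this is roughly how I tested it, using the `_simulate` API from Kibana Dev Tools (the sample log line below is just a stand-in, not my real data):

```
POST _ingest/pipeline/test-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] \"GET / HTTP/1.1\" 200 2326"
      }
    }
  ]
}
```

The simulated documents came back with the new fields populated, so the pipeline itself seems fine.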
I also didn't face any syntax issue when I configured Filebeat to use "ingest_pipeline: test-pipeline".
But when I searched for a specific field that should have been created from the message field by the ingest pipeline, it wasn't there.
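To narrow it down, I also checked from Kibana Dev Tools that the pipeline exists on the cluster and queried the index for the missing field directly (the index pattern `filebeat-*` and the field name `client_ip` here are just examples; substitute your own):

```
GET _ingest/pipeline/test-pipeline

GET filebeat-*/_search
{
  "query": {
    "exists": { "field": "client_ip" }
  }
}
```

If the second query returns no hits, the documents were indexed without going through the pipeline, which seems to be what is happening here.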
How can I debug and solve this issue? Here is my filebeat.yml:
    filebeat.idle_timeout: 10s
    filebeat.spool_size: 1024
    logging.level: info
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/apache2/*
        fields_under_root: true
        fields:
          doctype: apache_access
... (other config omitted) ...
    processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
    output.elasticsearch:
      host: ${NODE_NAME}
      hosts:
      - elasticsearch-test-es-default:9200
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: https
      ingest_pipeline: test-pipeline
      ssl.verification_mode: none
      username: elastic
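One thing I noticed while reading the Filebeat reference: the Elasticsearch output's option for this appears to be named `pipeline`, not `ingest_pipeline`. If that's the case, my output section would need to look like this (a sketch with the same values as above):

```yaml
output.elasticsearch:
  hosts:
  - elasticsearch-test-es-default:9200
  protocol: https
  username: elastic
  password: ${ELASTICSEARCH_PASSWORD}
  ssl.verification_mode: none
  # Filebeat names this setting "pipeline" (not "ingest_pipeline")
  pipeline: test-pipeline
```

Is that the correct setting, or is there something else I should check?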