How to use ingest pipelines with Logstash

I want to use the Apache access ingest pipeline with Filebeat and Logstash, rather than having Filebeat send directly to Elasticsearch.

When I configure Filebeat (filebeat.yml) to output directly to Elasticsearch, everything works: in Kibana I can see the logs already transformed by the ingest pipeline, for example with a source.ip field they would not otherwise have.
But when I follow this documentation and put Logstash in between, nothing happens: I still get plain Apache logs (at least, the events lack the fields I see when outputting directly to Elasticsearch).
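For completeness, the Filebeat side points at Logstash roughly like this (a simplified sketch; the host matches my ELK machine, and the apache module is enabled with "filebeat modules enable apache"):

```yaml
# filebeat.yml (sketch)
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml

# Send events to Logstash instead of Elasticsearch;
# 5044 matches the beats input in logstash.conf below
output.logstash:
  hosts: ["192.168.1.26:5044"]
```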

This is my logstash.conf:

input {
  beats {
    port => 5044
  }
  syslog {
    port => 5000
  }
  stdin { }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["elasticsearch:9200"] # uses HTTP protocol
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    if [@metadata][beat] {
      elasticsearch {
        hosts => ["elasticsearch:9200"] # uses HTTP protocol
        manage_template => false
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
    } else {
      elasticsearch {
        hosts => ["elasticsearch:9200"] # uses HTTP protocol
        manage_template => false
        index => "logstash-%{+YYYY.MM.dd}"
      }
    }
  }
  file {
    path => "/tmp/logstash-output.log"
    codec => rubydebug
  }
}

I am running all 3 ELK containers on the same machine, with this docker-compose.yml:

version: "3.7"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.10
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:7.17.10
    container_name: logstash
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      - ./logstash/logs:/tmp
    ports:
      - 5044:5044
      - 5000:5000
      - 9600:9600
    networks:
      - elk
    depends_on:
      - elasticsearch
    environment:
      - PIPELINE_ECS_COMPATIBILITY=v8
      - CONFIG_RELOAD_AUTOMATIC=true
      - LOG_LEVEL=warn
  
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.10
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
    networks:
      - elk
    depends_on:
      - elasticsearch

volumes:
  esdata:

networks:
  elk:
    driver: bridge
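Since port 9600 is mapped above, the Logstash node API can be used as a sanity check that the pipeline config was actually loaded (adjust the host to your setup):

```shell
# Logstash monitoring API: lists the loaded pipelines and their settings
curl -s 'http://192.168.1.26:9600/_node/pipelines?pretty'
```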

Finally, on another VM running Ubuntu Server with Apache and Filebeat, I run this command (my PC with the ELK containers has IP 192.168.1.26) to load the pipelines into Elasticsearch:

filebeat setup --pipelines --modules apache -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["192.168.1.26:9200"]'
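To confirm that the setup step actually created the pipelines, they can be listed in Elasticsearch (the names include the Filebeat version, e.g. filebeat-7.17.10-apache-access-pipeline):

```shell
# List the ingest pipelines that "filebeat setup --pipelines" loaded
curl -s 'http://192.168.1.26:9200/_ingest/pipeline/filebeat-*?pretty' | grep '"filebeat-'
```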

I don't know which part of the stack I need to study to figure out what I'm doing wrong; as far as I can tell I'm doing everything right.

I think that's all the important information. Thanks!

If you change the rubydebug codec on the file output to use { metadata => true }, do the events include the [@metadata][pipeline] field?
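That is, with the file output changed along these lines:

```
file {
  path => "/tmp/logstash-output.log"
  # metadata => true makes rubydebug print the otherwise-hidden @metadata
  # fields, so you can check whether [@metadata][pipeline] is actually set
  codec => rubydebug { metadata => true }
}
```

If the field is missing, the conditional in your output block will always take the else branch and no ingest pipeline will be applied.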