Duplicate data with Filebeat and write it into two indices

Hi,

I'm new to the forum so please excuse me if this post is in the wrong section.

I need some general help with Filebeat (beats in general).
The main goal is to have Filebeat send its data to Elasticsearch in duplicate.
Why? Because I need to anonymize the data after a while, and that data should stay available for a long time. The non-anonymized data should only be available for 7 days and then be deleted.

My plan was to do this with rollup jobs. However, these are due to be removed in future versions, and they probably wouldn't have been the right tool for this anyway.

My second attempt was to use Filebeat to write the data to two indices. Unfortunately, Filebeat only writes to one index and ignores the other, even though it starts normally and does not log any errors.

I have read through all the related posts and just can't find a solution.
I am also relatively new to the subject and am probably a bit overwhelmed by the Elastic documentation, which doesn't give me any clear clues as to how I could achieve my goal.

If you have any pointers on how I could achieve this, or have perhaps already done it yourself, I would be glad for some help.

Thank you very much

My filebeat.yml file:
At least the relevant part of it, here only the processors and the elasticsearch output that I used.
Please keep in mind that the basic log shipping itself works.

processors:
  # Add a field to identify original log entries
  - add_fields:
      target: ""
      fields:
        log_type: "original"
  # Copy the fields to create a duplicate event
  - copy_fields:
      fields:
        - from: "message"
          to: "duplicated_message"
      fail_on_error: false
      ignore_missing: true
  # Add a field to identify duplicated log entries
  - add_fields:
      when.equals:
        fields:
          log_type: "original"
      target: ""
      fields:
        log_type: "duplicate"
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: [myip:myport]

  # Protocol - either `http` (default) or `https`.
  protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "myapikey"
  username: "myuser"
  password: "mypw"

  ssl.certificate_authorities: ["path_to"]
  allow_older_versions: true
  indices:
    - index: "filebeat-original-logs"
      when.equals:
        log_type: "original"
    - index: "duplicate-logs-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.equals:
        log_type: "duplicate"

This is not possible when Filebeat is configured to send data directly to Elasticsearch; you need to use Logstash.

Configure your Filebeat to send logs to Logstash, then in Logstash you can have multiple outputs.
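For illustration, here is a rough, untested sketch of what that could look like. The port, the index names and the fingerprint-based "anonymization" are only assumptions based on what you described, so adjust them to your setup. In filebeat.yml you would replace output.elasticsearch with something like:

output.logstash:
  hosts: ["localhost:5044"]

And in a Logstash pipeline (for example a file under /etc/logstash/conf.d/) you can clone every event and route the original and the copy to different indices:

input {
  beats {
    port => 5044
  }
}

filter {
  # Duplicate every event; the copy is labelled "anonymized".
  clone {
    clones => ["anonymized"]
  }

  # Depending on the clone filter's ecs_compatibility setting, the copy is
  # marked via [type] or via [tags], so check both.
  if [type] == "anonymized" or "anonymized" in [tags] {
    # Example anonymization: replace the message with its SHA-256 hash.
    fingerprint {
      source => "message"
      target => "message"
      method => "SHA256"
    }
  }
}

output {
  if [type] == "anonymized" or "anonymized" in [tags] {
    elasticsearch {
      hosts => ["https://myip:myport"]
      index => "filebeat-anonymized-%{+YYYY.MM.dd}"
      # plus your user/password and TLS settings
    }
  } else {
    elasticsearch {
      hosts => ["https://myip:myport"]
      index => "filebeat-original-%{+YYYY.MM.dd}"
      # plus your user/password and TLS settings
    }
  }
}

The non-anonymized indices can then be deleted after 7 days, for example with an ILM policy attached to them.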

Thanks for your reply.
I did what you said: I installed Logstash, and Logstash's output is now being sent into one ingest pipeline.
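Roughly like this in the Logstash output section (the pipeline name here is just a placeholder):

output {
  elasticsearch {
    hosts => ["https://myip:myport"]
    pipeline => "my-ingest-pipeline"
    # plus user/password and TLS settings
  }
}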

@Black-Shadow I would say you can also just run multiple instances of Filebeat using different configuration files as a workaround.
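For example, a second configuration file could ship the same logs into its own index (all paths and names below are made up, adjust them to your environment); the important part is that every Filebeat instance needs its own path.data so the registries don't collide:

# e.g. /etc/filebeat/filebeat-duplicate.yml
path.data: /var/lib/filebeat-duplicate

filebeat.inputs:
  - type: filestream
    id: duplicate-logs
    paths:
      - /var/log/myapp/*.log

output.elasticsearch:
  hosts: ["myip:myport"]
  protocol: "https"
  username: "myuser"
  password: "mypw"
  index: "duplicate-logs-%{[agent.version]}-%{+yyyy.MM.dd}"

# When overriding the index, Filebeat also needs its own template settings,
# and ILM has to be disabled or configured separately:
setup.template.name: "duplicate-logs"
setup.template.pattern: "duplicate-logs-*"
setup.ilm.enabled: false

You would then run the second instance with its own config, e.g. filebeat -c filebeat-duplicate.yml, or set it up as a separate service.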