Duplicate metadata from _id into a new field

I have a Filebeat pipeline that ingests metrics into an Elasticsearch index. A separate Python application fetches some of this data through the SQL API to display it in a web application. The problem arises when I need to perform bulk edits: I use the Bulk API for this, and each bulk action requires the document's `_id` metadata, but I cannot select that field through the SQL API.
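
For context, the Python side looks roughly like this (a simplified sketch; the connection details, index pattern, and column names are placeholders, and I am on the elasticsearch-py 8.x client):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder connection

# Read rows for the web app through the SQL API. Only fields from
# _source can be selected here, so there is no way to get _id back.
resp = es.sql.query(query='SELECT host, metric_value FROM "mdb-test*"')
rows = resp["rows"]

# A later bulk edit needs the _id of each document in its action line,
# e.g. {"update": {"_index": "mdb-test", "_id": "<needed here>"}}
```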

I tried the following workaround: an ingest pipeline that copies the document `_id` into a `doc_id` field in `_source`, applied as the default pipeline via an index template:

```
PUT _ingest/pipeline/cm_metadata_fields
{
  "processors": [
    {
      "script": {
        "description": "This processor duplicates the metadata of the ID into a new field within the _source field of each document",
        "lang": "painless",
        "source": """
            ctx['doc_id'] = ctx['_id'];
          """
      }
    }
  ]
}

PUT _index_template/cmdb
{
  "index_patterns": ["mdb-test*"],
  "template": {
    "settings": {
      "index.default_pipeline": "cm_metadata_fields"
    }
  }
}
```
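
To check the pipeline in isolation, it can be run through the simulate endpoint; here is a sketch using the same Python client (the sample document and its `_id` are made up):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder connection

# Run the pipeline against a hand-built test document. Note that the
# simulated document carries an explicit _id, which is not necessarily
# true for the documents Filebeat sends.
resp = es.ingest.simulate(
    id="cm_metadata_fields",
    docs=[
        {
            "_index": "mdb-test-000001",
            "_id": "test-id-1",
            "_source": {"metric_value": 42},
        }
    ],
)
print(resp["docs"][0]["doc"]["_source"])  # expect a doc_id field here
```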

However, when the data is ingested, the pipeline does not copy the `_id`; `doc_id` is saved as `null`. Is there a similar option available that would work here?