How to assign weight attribute to a completion suggester field using Elasticsearch Hadoop connector (Spark/Databricks)

Hi!

We're successfully using the org.elasticsearch.spark.sql driver to upload our Databricks Spark tables as Elasticsearch indices. We now want to assign the weight attribute to one of our completion suggester fields to make the suggestion order deterministic (note that you can't sort on fields of this type). In the REST API, this is achieved by specifying the weight attribute as part of each document, as shown in the official documentation.
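
For context, this is roughly what the REST API expects (index, document id, and field name here are taken from the docs example, not from our setup):

    PUT music/_doc/1
    {
      "suggest": {
        "input": ["Nevermind", "Nirvana"],
        "weight": 34
      }
    }

So the weight lives inside the completion field's value for each individual document, rather than being a separate top-level field.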

How do I do this using the es-hadoop driver? I tried adding a column named weight to my DataFrame, but that fails because the document no longer matches our strict mapping. I couldn't find any information about this; there is, for example, no obvious connector setting for it.
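To illustrate why the extra column is rejected: our mapping looks roughly like the sketch below (the field name title_suggest is illustrative, not our real schema). With dynamic set to strict, any top-level field that isn't declared, such as a weight column written by the connector, causes the write to fail:

    {
      "mappings": {
        "dynamic": "strict",
        "properties": {
          "title_suggest": { "type": "completion" }
        }
      }
    }

The write itself is done like this: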

    (
        data.write.format("org.elasticsearch.spark.sql")
        .options(**es_conf)
        .mode("overwrite")
        .save(full_index_name)
    )