We are using Elasticsearch v7.10 and use Spark to bulk-write documents to the index. I was under the impression that we could sign requests by passing headers like below:
df.write.mode("append").format("org.elasticsearch.spark.sql") \
    .option("es.nodes", "xxxxxxxxxxxx") \
    .option("es.port", "443") \
    .option("es.net.ssl", "true") \
    .option("es.nodes.wan.only", "true") \
    .option("es.net.http.header.x-amz-date", headers["x-amz-date"]) \
    .option("es.net.http.header.x-amz-content-sha256", headers["x-amz-content-sha256"]) \
    .option("es.net.http.header.Authorization", headers["Authorization"]) \
    .option("es.mapping.id", "id") \
    .option("es.resource", "test") \
    .save()
However, the above doesn't work and returns an error. I'd like to know whether the connector has built-in support for signing requests (AWS SigV4).
Client version used:
org.elasticsearch:elasticsearch-spark-30_2.12:8.6.1
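For context, here is a minimal, stdlib-only sketch of how a SigV4 header set like the `headers` dict referenced above could be computed. The host, path, credentials, region (`us-east-1`), and service name (`es`) are placeholder assumptions, not values from our setup:

```python
# Sketch of AWS SigV4 signing for a single request (stdlib only).
# All concrete values (host, region, service, credentials) are placeholders.
import datetime
import hashlib
import hmac


def _hmac(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def sigv4_headers(method, host, path, payload, access_key, secret_key,
                  region="us-east-1", service="es", now=None):
    now = now or datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    date_stamp = now.strftime("%Y%m%d")
    payload_hash = hashlib.sha256(payload).hexdigest()

    # Canonical request: method, URI, query string (empty here),
    # canonical headers, signed header list, payload hash.
    canonical_headers = (f"host:{host}\n"
                         f"x-amz-content-sha256:{payload_hash}\n"
                         f"x-amz-date:{amz_date}\n")
    signed_headers = "host;x-amz-content-sha256;x-amz-date"
    canonical_request = "\n".join([method, path, "", canonical_headers,
                                   signed_headers, payload_hash])

    # String to sign ties the request hash to a date/region/service scope.
    scope = f"{date_stamp}/{region}/{service}/aws4_request"
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    # Derive the signing key by chained HMACs over the scope components.
    key = _hmac(("AWS4" + secret_key).encode(), date_stamp)
    key = _hmac(key, region)
    key = _hmac(key, service)
    key = _hmac(key, "aws4_request")
    signature = hmac.new(key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()

    return {
        "x-amz-date": amz_date,
        "x-amz-content-sha256": payload_hash,
        "Authorization": (
            f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
            f"SignedHeaders={signed_headers}, Signature={signature}"
        ),
    }


headers = sigv4_headers("POST", "example.es.amazonaws.com", "/test/_bulk",
                        b"", "AKIA_EXAMPLE", "SECRET_EXAMPLE")
```

Note that a SigV4 signature covers the timestamp and the request payload, so a header set computed once up front generally cannot stay valid across the many bulk requests the connector issues over the lifetime of the write.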