Elasticsearch on Spark - Index Mapping

Hello everyone,

I'm trying to write a Spark job in Java that reads a CSV file and writes the data into Elasticsearch.

I'm defining a class with the following instance variables:

  • productId
  • title

I have succeeded in writing my data into Elasticsearch, but I would like to specify that title should not be analyzed.

How can I do that within the Spark job?

Thanks in advance!

You need to define the mapping beforehand in Elasticsearch. There are several reasons for this:

  1. Defining the mapping is a one-time operation, while a job can run multiple times.
  2. There is no clear life-cycle hook that the connector can use across all integrations to add the mapping. In addition, concerns like versioning and merge conflicts are not easy to resolve and fall outside the scope of the connector.
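For example, the index could be created with a mapping like the following before the job runs. This is just a sketch: the index name `products` and type name `product` are assumptions, and on Elasticsearch 5.x+ you would use the `keyword` field type instead of a `not_analyzed` string:

```json
{
  "mappings": {
    "product": {
      "properties": {
        "productId": { "type": "string" },
        "title":     { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}
```

You can also set `es.index.auto.create` to `false` in the connector configuration, so the job fails fast instead of auto-creating the index with dynamically inferred (analyzed) mappings.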

That's what I suspected. Thanks for your answer!