This seems more like a generic Spark question. If you're asking how to parse JSON using Scala, that can be done in many different ways, most of which are reasonable. That alone isn't a very helpful tip, though, so to give you a starting point: ES-Hadoop uses the Jackson JSON libraries to parse JSON into objects and vice versa. I would look at that library as a good place to start.
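As a rough illustration of that starting point, here is a minimal sketch of parsing a JSON string into a Scala case class with jackson-module-scala (the Scala bindings for Jackson, the library ES-Hadoop uses internally). This assumes the jackson-module-scala dependency is on your classpath, and the `Event` case class and its fields are made up for illustration:

```scala
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

// Hypothetical document shape, for illustration only.
case class Event(id: String, message: String)

object JsonParseExample {
  // An ObjectMapper is thread-safe once configured; create it once and reuse it.
  val mapper = new ObjectMapper()
  mapper.registerModule(DefaultScalaModule)

  def parse(json: String): Event =
    mapper.readValue(json, classOf[Event])

  def main(args: Array[String]): Unit = {
    val event = parse("""{"id":"1","message":"hello"}""")
    println(event.message)
  }
}
```

In a Spark job you would typically do this inside a `map` over your input RDD, keeping the mapper in a singleton (as above) so it isn't serialized per record.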
As for ensuring the mapping is what you want, I would suggest creating the index with your desired mapping before running your job. If you don't want to pre-create the index every time and would rather ES-Hadoop create it, you can create an index template in Elasticsearch that will assign the mapping you want to any index whose name matches its pattern.
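As a sketch of the template approach: the request below registers a template that applies a mapping to any index whose name matches a pattern. The index pattern, field names, and mapping here are illustrative, and note that the exact template API and mapping syntax vary by Elasticsearch version (newer releases use `_index_template` with `index_patterns` instead of the legacy `_template` endpoint shown here):

```
curl -XPUT "http://localhost:9200/_template/spark-logs" -H 'Content-Type: application/json' -d'
{
  "template": "spark-logs-*",
  "mappings": {
    "properties": {
      "timestamp": { "type": "date" },
      "message":   { "type": "text" }
    }
  }
}'
```

With this in place, ES-Hadoop can auto-create `spark-logs-2017.01.01` (or any other matching index) and the fields will get the mapped types rather than whatever Elasticsearch would infer on its own.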