There's no functionality in ES-Hadoop to force a type conversion outside the existing mapping, and that is on purpose.
Elasticsearch can do that much more reliably and better than the connector: declare the mapping a priori (index templates help a lot here).
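As a minimal sketch of declaring the mapping up front, an index template along these lines pins down the field types before any data arrives (the template name, pattern, type, and field names below are hypothetical):

```json
{
  "template": "logs-*",
  "mappings": {
    "event": {
      "properties": {
        "timestamp": { "type": "date", "format": "dateOptionalTime" },
        "count":     { "type": "long" }
      }
    }
  }
}
```

Registered via `PUT _template/<name>`, this applies to every matching index, so ES controls the conversion rather than the connector.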
No. The schema is simply Spark's representation of the data; the mapping in ES is its own.
The conversion in ES-Hadoop relies on conventions and, if needed, is pluggable through ValueReader/ValueWriter.
By the way, have you tried using the es.mapping.date.rich parameter introduced in 2.1?
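For illustration, the setting is passed like any other ES-Hadoop option on the Spark configuration; a sketch (the app name and node address are assumptions):

```scala
import org.apache.spark.SparkConf

// Hypothetical configuration: es.mapping.date.rich controls whether date
// fields are returned as rich objects or left as their raw string form.
val conf = new SparkConf()
  .setAppName("es-example")              // assumed application name
  .set("es.nodes", "localhost:9200")     // assumed cluster address
  .set("es.mapping.date.rich", "false")  // read dates back as plain strings
```

Flipping the flag to false is the usual way to opt out of the connector's date parsing and handle the raw values yourself.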