Hello,
I'm sending java.sql.Timestamp objects to Elastic (through Spark).
With the default mapping everything works fine; dates show up in the index as long values (epoch milliseconds).
However after changing the field mapping to this:
"submitDate": {
"format": "dateOptionalTime",
"type": "date"
},
I'm getting the following error:
Bad Request(400) - failed to parse [submitDate]; Bailing out..
Could someone explain what Elasticsearch is expecting here, i.e. which data type should be sent?
Thanks a lot,
@markcitizen, the Spark and Hadoop connectors serialize date information only in ISO 8601 date-time format. If you raise the logging level to TRACE within the Spark job, you can inspect the exact date string the connector is sending to Elasticsearch and set your mapping's format accordingly.
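If you want the field to accept both the epoch-millisecond longs you saw with the default mapping and the ISO 8601 strings the connector sends, one option is to list multiple formats separated by `||` in the mapping. This is a sketch, not verified against your cluster; note that newer Elasticsearch versions name the built-in format `date_optional_time` rather than `dateOptionalTime`:

```json
"submitDate": {
  "type": "date",
  "format": "dateOptionalTime||epoch_millis"
}
```

With a multi-format mapping, Elasticsearch tries each listed format in order when parsing an incoming value, so either representation should index without the 400 parse failure.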