Unable to read flattened type field using pyspark connector

After reading an index from Elasticsearch using PySpark in Databricks, all fields appeared in the DataFrame schema except one field of type 'flattened', which did not show up at all. Is there an option I need to include when reading? The following is my snippet to read from the index.

df = spark.read.format("org.elasticsearch.spark.sql")\
      .option("es.nodes", ",".join(db['nodes']))\
      .option("es.mapping.date.rich", "false")\
      .option("es.net.http.auth.user", 'abc')\
      .option("es.net.http.auth.pass", '123abc')\
      .load('indexname')

elasticsearch: 7.5.1
spark: 2.4.3
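One way to narrow this down is to compare the index mapping against the DataFrame schema and see which mapping types the connector may be silently skipping. The sketch below is purely illustrative: the mapping, the field names ('title', 'labels'), and the list of supported types are assumptions, not taken from the actual index or from the connector's source.

```python
import json

# Hypothetical mapping for an index like the one in the question; the
# 'labels' field uses the 'flattened' type (added in Elasticsearch 7.3),
# which older es-hadoop/es-spark releases may not recognize.
mapping = {
    "properties": {
        "title": {"type": "text"},
        "labels": {"type": "flattened"},  # the field missing from the DataFrame
    }
}

# Assumed subset of mapping types the connector maps to Spark SQL types;
# 'flattened' is deliberately absent to model the observed behavior.
SUPPORTED_TYPES = {"text", "keyword", "long", "double", "date", "boolean"}

def unsupported_fields(props):
    """Return field names whose mapping type is not in SUPPORTED_TYPES,
    i.e. candidates for being dropped from the inferred schema."""
    return [name for name, spec in props.items()
            if spec.get("type") not in SUPPORTED_TYPES]

missing = unsupported_fields(mapping["properties"])
print(missing)  # → ['labels']
```

If the flattened field is the only one reported, that points at the connector's schema inference rather than at the read options, and upgrading the elasticsearch-hadoop library (or remapping the field to a supported type) would be the direction to investigate.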
