PySpark Keeps Enforcing Type Text for Geo-Point, Despite Explicit Mapping

Hi guys,
I have been trying to index some data containing a geo_point value into Elasticsearch using Spark, specifically the es-hadoop connector, in order to build a coordinate map visualization in Kibana. Both the connector and the ELK stack are version 7.9.3. I have already read a similar problem posted on this forum earlier: Elastic spark connector not able map longitude ,latitude values to geo_type. Please note that my location value in Spark is already a string, in the format "lat, long".
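For context, the column is built along these lines (a minimal sketch; lat and lon are placeholders for my actual source column names):

    from pyspark.sql import functions as F

    # build booking_location as a "lat, lon" string from the two
    # coordinate columns (lat and lon are placeholder names here)
    df_bookings = df_bookings.withColumn(
        "booking_location",
        F.concat_ws(", ", F.col("lat").cast("string"), F.col("lon").cast("string"))
    )
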
Here is my mapping:

    {
      "settings": {
        "index": {
          "number_of_shards": "1",
          "number_of_replicas": "1",
          "analysis": {
            "analyzer": {
              "analyzer-name": {
                "filter": "lowercase",
                "type": "custom",
                "tokenizer": "keyword"
              }
            }
          }
        }
      },
      "mappings": {
        "properties": {
          "booking_end_at": {
            "type": "date"
          },
          "booking_location": {
            "type": "geo_point"
          },
          "booking_start_at": {
            "type": "date"
          },
          "created_at": {
            "type": "date"
          },
          "updated_at": {
            "type": "date"
          }
        }
      }
    }
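
For completeness, I create the index with that mapping up front, roughly like this (a sketch using the requests library against the default local endpoint; mapping_body holds the JSON above):

    import requests

    # create the bookings index with the explicit mapping shown above
    resp = requests.put("http://localhost:9200/bookings", json=mapping_body)
    resp.raise_for_status()  # surface any failure to create the index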

and here is the code I am using for indexing:

    df_bookings.write.format("org.elasticsearch.spark.sql") \
        .option("es.resource", "bookings/docs") \
        .option("es.nodes", "localhost") \
        .option("es.port", "9200") \
        .option("es.mapping.id", "id") \
        .option("es.nodes.discovery", "false") \
        .option("es.nodes.wan.only", "true") \
        .option("es.http.timeout", "10m") \
        .option("es.write.operation", "index") \
        .mode("append") \
        .save()

Yet I still get the following error from PySpark:

    org.elasticsearch.hadoop.rest.EsHadoopRemoteException: illegal_argument_exception: mapper [booking_location] cannot be changed from type [geo_point] to [text].

My guess is that the connector infers the text type from the string column and tries to overwrite the geo_point type that the mapping already imposes. How can I solve this? Thanks in advance.
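
In case it helps, this is how I double-check the mapping on the live index the connector writes to (a quick sketch with requests, to see what type booking_location actually has):

    import requests

    # fetch the mapping actually stored on the bookings index
    resp = requests.get("http://localhost:9200/bookings/_mapping")
    props = resp.json()["bookings"]["mappings"]["properties"]
    print(props["booking_location"])  # expect {'type': 'geo_point'}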
