Issues ingesting log files with a fresh installation

Hello. I'm hoping someone can please help me with an indexing issue that I'm having. I'm working with a fresh setup of Elasticsearch and Kibana. Everything is set up: Kibana loads, I can log in with security enabled, and the status page shows everything in the green. I am now attempting to load logs into Elasticsearch for the first time so that an index can be created. I have an ingest pipeline set up and an index template loaded. Upon trying to load a log file into Elasticsearch, I'm seeing this error quite frequently in my Elasticsearch log:

[2020-05-20T14:49:01,948][INFO ][o.e.a.b.TransportShardBulkAction] [logs-node-1] [2020-05-18][0] mapping update rejected by primary
java.lang.IllegalArgumentException: mapper [source.geo.location] of different type, current_type [geo_point], merged_type [ObjectMapper]

I'm not sure if this is a show-stopper, but no index has been created so far, and this is the only error I see.
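For what it's worth, since the error message references an index named after the date, here is the check I've been running to see whether any daily index actually exists (host and credentials are redacted the same way as in the commands further down, and the index pattern is just my guess at what the daily indices would be called):

curl --user <user>:<password> -X GET "##.##.##.##:9243/_cat/indices/2020-*?v"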

Here is part of my ingest pipeline where this is defined:

{
  "geoip": {
    "field": "ClientIP",
    "target_field": "source.geo",
    "properties": [
      "ip",
      "country_name",
      "continent_name",
      "region_iso_code",
      "region_name",
      "city_name",
      "timezone",
      "location"
    ]
  }
}
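In case the surrounding structure matters, this is roughly how that processor sits inside the full pipeline definition I load (the pipeline name and description here are placeholders, and I've trimmed my other processors out):

curl --user <user>:<password> -X PUT "##.##.##.##:9243/_ingest/pipeline/my_logs_pipeline" -H "Content-Type: application/json" -d '
{
  "description": "Parse inbound log files",
  "processors": [
    {
      "geoip": {
        "field": "ClientIP",
        "target_field": "source.geo",
        "properties": ["ip", "country_name", "continent_name", "region_iso_code", "region_name", "city_name", "timezone", "location"]
      }
    }
  ]
}'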

And here is part of my relevant index template:

"source.geo": {
        "properties": {
           "ip": {
              "type": "ip"
           },
           "postal_code": {
              "type": "keyword"
           },
           "location": {
              "type": "geo_point"
           },

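If it helps with diagnosing the conflict, this is the call I was planning to use to see how "source.geo.location" actually ended up mapped on the daily index from the error message (assuming that index exists):

curl --user <user>:<password> -X GET "##.##.##.##:9243/2020-05-18/_mapping/field/source.geo.location?pretty"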
My inbound log file has a "ClientIP" field, which should be what triggers this processor. Any ideas as to why the "geo_point" data type is having issues here? Please let me know if you need any additional information to assist me with this. Thanks in advance!

To provide a little more information, I tried creating a quick ingest pipeline and a small index to see what "location" is returning:

curl --user <user>:<password> -X PUT "##.##.##.##:9243/_ingest/pipeline/testgeoip" -H "Content-Type: application/json" -d '{"description" : "Add geoip info","processors" : [{"geoip" : {"field" : "ip"}}]}'
curl --user <user>:<password> -X PUT "##.##.##.##:9243/my_index/_doc/my_id?pipeline=testgeoip" -H "Content-Type: application/json" -d '{"ip":"8.8.8.8"}'

I then fetched the contents of the index:

curl --user <user>:<password> -X GET "##.##.##.##:9243/my_index/_doc/my_id"

    "_id": "my_id",
    "_index": "my_index",
    "_primary_term": 1,
    "_seq_no": 0,
    "_source": {
        "geoip": {
            "continent_name": "North America",
            "country_iso_code": "US",
            "location": {
                "lat": 37.751,
                "lon": -97.822
            }
        },
        "ip": "8.8.8.8"
    },
    "_type": "_doc",
    "_version": 1,
    "found": true
}

This gives me a valid lat/lon object back, which should work just fine with the geo_point data type. So I'm not clear on why I'm getting this error while ingesting a log file. Any insights would be great! Thanks.
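If it's useful, my next test was going to be indexing that same document into a small index where "location" is explicitly mapped as geo_point, to confirm the type itself behaves (the index name here is just for the test):

curl --user <user>:<password> -X PUT "##.##.##.##:9243/my_geo_test" -H "Content-Type: application/json" -d '{"mappings":{"properties":{"geoip":{"properties":{"location":{"type":"geo_point"}}}}}}'
curl --user <user>:<password> -X PUT "##.##.##.##:9243/my_geo_test/_doc/my_id?pipeline=testgeoip" -H "Content-Type: application/json" -d '{"ip":"8.8.8.8"}'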

Any chance this is due to not having Logstash installed? I already have the latest versions of Elasticsearch and Kibana installed (v7.7), and I didn't think I needed it. The geoip and geo_point mapping seem to be commonly associated with Logstash, but the functionality also appears to be available via the ingest processor. I've come across some information saying that the geo_point mapping has to be explicitly declared; for example:

https://www.elastic.co/guide/en/elasticsearch/reference/current/geoip-processor.html

"Although this processor enriches your document with a location field containing the estimated latitude and longitude of the IP address, this field will not be indexed as a geo_point type in Elasticsearch without explicitly defining it as such in the mapping."

So my mapping already has this, unless there is an extra step I need to take? Has anyone else come across this error before? Thanks.
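For completeness, here is how I've been double-checking that the stored template really does declare geo_point (the template name is a placeholder for my actual one):

curl --user <user>:<password> -X GET "##.##.##.##:9243/_template/my_logs_template?pretty"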

@Andrew_Cholakian1 or @shahzad31, since you were both so helpful on a couple of my other posts, would you mind taking a look at this issue, or seeing if anyone else from the Elastic team could? This is an important one for me to solve soon, and it is driving me crazy :slight_smile: Thanks!