Can't produce a geo_point or geo_shape data type for JSON geospatial data in Elasticsearch so that it is viewable in Kibana

OS: CentOS 7
[root@localhost logstash]# curl -XGET 'http://localhost:9200'
{
"name" : "primary-node",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "0tXluLtmSmyY9_hj-XOfrQ",
"version" : {
"number" : "6.8.6",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "3d9f765",
"build_date" : "2019-12-13T17:11:52.013738Z",
"build_snapshot" : false,
"lucene_version" : "7.7.2",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}

Trying to accomplish

1.) Push JSON data with a geospatial column from Logstash into Elasticsearch so it is visible in Kibana. Kibana requires a geo_point data type, and I am unable to transform the data appropriately.
2.) The template that I am referencing is not being created, and I don't know why.

(05:46:58pm)-(postgres@localhost)-(~)
-> curl http://localhost:9200/_template/pfdb?pretty
{ }

  • The process is using the default template instead of elasticsearch-template-es7x.json, even though that file is specified in the configuration.

[root@localhost elasticsearch]# more elasticsearch-template-es7x.json
{
  "index_patterns" : ["pfdb*", "logstash-*"],   <--- Changed this
  "version" : 60001,
  "settings" : {
    "index.refresh_interval" : "5s",
    "number_of_shards" : 1,
    "number_of_replicas" : 0
  },
  "mappings" : {
    "dynamic_templates" : [ {
      "message_field" : {
        "path_match" : "message",
        "match_mapping_type" : "string",
        "mapping" : {
          "type" : "text",
          "norms" : false
        }
      }
    }, {
      "string_fields" : {
        "match" : "*",
        "match_mapping_type" : "string",
        "mapping" : {
          "type" : "text", "norms" : false,
          "fields" : {
            "keyword" : { "type" : "keyword", "ignore_above" : 256 }
          }
        }
      }
    } ],
    "properties" : {
      "@timestamp" : { "type" : "date" },
      "@version" : { "type" : "keyword" },
      "location" : { "type" : "geo_shape" },   <-- Changed this as well
      "geoip" : {
        "dynamic" : true,
        "properties" : {
          "ip" : { "type" : "ip" },
          "location" : { "type" : "geo_point" },
          "latitude" : { "type" : "half_float" },
          "longitude" : { "type" : "half_float" }
        }
      }
    }
  }
}
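As a sanity check before pointing Logstash at the template, the file can be parsed and its key mappings inspected with a few lines of Python. This is a minimal sketch; the inline string below is a trimmed-down copy of the template above, not the full file:

```python
import fnmatch
import json

# Trimmed-down copy of the custom template (illustrative, not the full file)
template_src = """
{
  "index_patterns": ["pfdb*", "logstash-*"],
  "mappings": {
    "properties": {
      "location": { "type": "geo_shape" },
      "geoip": { "properties": { "location": { "type": "geo_point" } } }
    }
  }
}
"""

template = json.loads(template_src)  # raises an error on malformed JSON

# Confirm the index pattern covers the index name Logstash will create
index_name = "pfdb.location.locid.static_y2020-02_v1"
assert any(fnmatch.fnmatch(index_name, p) for p in template["index_patterns"])

# Confirm the geo mappings are where we expect them
props = template["mappings"]["properties"]
print(props["location"]["type"])                         # geo_shape
print(props["geoip"]["properties"]["location"]["type"])  # geo_point
```

Running the real file through a check like this would at least rule out a JSON syntax error as the reason the template never shows up under `_template/pfdb`.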

  • Sample of the JSON data ingested, read in via stdin.

{"Locid":1,"Country":"O1","Region":"","City":"","Postalcode":"","Latitude":0.0000,"Longitude":0.0000,"Location":{"type":"Point","coordinates":[0,0]},"Metrocode":null,"Areacode":null}
{"Locid":2,"Country":"AP","Region":"","City":"","Postalcode":"","Latitude":35.0000,"Longitude":105.0000,"Location":{"type":"Point","coordinates":[105,35]},"Metrocode":null,"Areacode":null}
{"Locid":3,"Country":"EU","Region":"","City":"","Postalcode":"","Latitude":47.0000,"Longitude":8.0000,"Location":{"type":"Point","coordinates":[8,47]},"Metrocode":null,"Areacode":null}
{"Locid":4,"Country":"AD","Region":"","City":"","Postalcode":"","Latitude":42.5000,"Longitude":1.5000,"Location":{"type":"Point","coordinates":[1.5,42.5]},"Metrocode":null,"Areacode":null}
{"Locid":5,"Country":"AE","Region":"","City":"","Postalcode":"","Latitude":24.0000,"Longitude":54.0000,"Location":{"type":"Point","coordinates":[54,24]},"Metrocode":null,"Areacode":null}
{"Locid":6,"Country":"AF","Region":"","City":"","Postalcode":"","Latitude":33.0000,"Longitude":65.0000,"Location":{"type":"Point","coordinates":[65,33]},"Metrocode":null,"Areacode":null}
{"Locid":7,"Country":"AG","Region":"","City":"","Postalcode":"","Latitude":17.0500,"Longitude":-61.8000,"Location":{"type":"Point","coordinates":[-61.8,17.05]},"Metrocode":null,"Areacode":null}
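One thing worth noting about these rows: GeoJSON stores coordinates in [longitude, latitude] order, the reverse of the lat/lon order most people write by hand. A small Python check over one of the sample lines above makes the ordering explicit:

```python
import json

# One of the sample NDJSON rows from above
line = ('{"Locid":7,"Country":"AG","Region":"","City":"","Postalcode":"",'
        '"Latitude":17.0500,"Longitude":-61.8000,'
        '"Location":{"type":"Point","coordinates":[-61.8,17.05]},'
        '"Metrocode":null,"Areacode":null}')

row = json.loads(line)
lon, lat = row["Location"]["coordinates"]  # GeoJSON order: [lon, lat]

# The GeoJSON pair should agree with the flat Latitude/Longitude columns
assert lon == row["Longitude"] and lat == row["Latitude"]
print(lat, lon)  # 17.05 -61.8
```

Elasticsearch follows the same [lon, lat] convention for array-form geo_point and for geo_shape coordinates, so the sample data's ordering is already correct.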

[root@localhost logstash]# more logstash.conf    (Logstash version 6.8.6)
input {
  stdin {}
}

filter {
  mutate { add_field => { "[@metadata][_index]" => "pfdb.location.locid.static_y2020-02_v1" } }
  mutate { add_field => { "[@metadata][_type]" => "location" } }
  mutate { convert => { "[Location][coordinates]" => "float" } }
  mutate { rename => { "Location" => "[location]" } }
  mutate { remove_field => ["@version"] remove_field => ["@timestamp"] }
  mutate { replace => [ "message", "%{message}" ] gsub => [ 'message', '\n', '' ] }
  if [message] =~ /^{.*}$/ {
    json { source => "message" }
  }
}

output {
  # We build the "new" elasticsearch index in the default cluster
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][_index]}"
    action => "index"
    manage_template => "true"
    template => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/elasticsearch-template-es7x.json"
    template_name => "pfdb"
    ilm_enabled => false
    template_overwrite => "true"
    doc_as_upsert => false
    codec => json
  }
}
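One likely problem with the filter block above (an inference on my part, worth verifying): the `mutate` filters that convert and rename `[Location]` run *before* the `json` filter parses the message, so at that point the `Location` field does not exist yet and both the convert and the rename are no-ops. The indexed documents therefore still carry `Location` (capital L), which the template's lowercase `location` mapping never matches. The ordering effect can be sketched in Python by replaying the filters against a plain dict:

```python
import json

event = {"message": '{"Locid": 2, "Location": {"type": "Point", "coordinates": [105, 35]}}'}

# Filter order as written in the config: the rename runs before the json parse
if "Location" in event:                        # mutate rename => { "Location" => "[location]" }
    event["location"] = event.pop("Location")  # never fires: field not parsed yet
event.update(json.loads(event["message"]))     # json { source => "message" }

print("Location" in event, "location" in event)  # True False
```

If that is indeed the cause, moving the `json` filter above the `mutate` convert/rename filters in logstash.conf should let the rename take effect, so the parsed field ends up as lowercase `location` and is covered by the template's geo_shape mapping.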

  • This results in the index being created and listed in Kibana:

Index Name: pfdb.location.locid.static_y2020-02_v1

but I'm getting the following error message.

[2020-02-27T17:21:31,236][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [primary-node] failed to put mappings on indices [[[pfdb.location.locid.static_y2020-02_v1/EYsJarCuTo-fm9EcrNSQJw]]], type [doc]
java.lang.IllegalArgumentException: mapper [Location.coordinates] cannot be changed from type [float] to [long]
at org.elasticsearch.index.mapper.MappedFieldType.checkTypeName(MappedFieldType.java:145) ~[elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.index.mapper.MappedFieldType.checkCompatibility(MappedFieldType.java:158) ~[elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.index.mapper.FieldTypeLookup.validateField(FieldTypeLookup.java:157) ~[elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.index.mapper.FieldTypeLookup.copyAndAddAll(FieldTypeLookup.java:108) ~[elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:494) ~[elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:403) ~[elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:338) ~[elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:330) ~[elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:231) ~[elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:643) ~[elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:270) ~[elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:200) [elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:135) [elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-6.8.6.jar:6.8.6]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-6.8.6.jar:6.8.6]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_242]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_242]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
[2020-02-27T17:21:31,246][DEBUG][o.e.a.b.TransportShardBulkAction] [primary-node] [pfdb.location.locid.static_y2020-02_v1][2] failed to execute bulk item (index) index {[pfdb.location.locid.static_y2020-02_v1][doc][AM29iHABwGAXSffJlRrw], source[{"message":"{"Locid":24,"Country":"BF","Region":"","City":"","Postalcode":"","Latitude":13.0000,"Longitude":-2.0000,"Location":{"type":"Point","coordinates":[-2,13]},"Metrocode":null,"Areacode":null}","Country":"BF","host":"localhost.localdomain","Postalcode":"","City":"","Longitude":-2.0000,"Latitude":13.0000,"Location":{"type":"Point","coordinates":[-2,13]},"Locid":24,"Region":"","Metrocode":null,"Areacode":null}]}
java.lang.IllegalArgumentException: mapper [Location.coordinates] cannot be changed from type [float] to [long]
        (stack trace identical to the one above)
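The `cannot be changed from type [float] to [long]` error is consistent with dynamic mapping kicking in (again an inference, not confirmed by the logs): `Location.coordinates` with a capital L is not covered by the template, which maps lowercase `location`, so the first document indexed, with coordinates `[0,0]`, gets dynamically mapped from JSON integers as `long`; the `convert => float` filter (or a later row such as `[1.5,42.5]`) then clashes with that existing mapping. The JSON integer/float distinction that drives this is easy to see in Python:

```python
import json

# JSON distinguishes 0 (integer -> ES dynamic "long") from 1.5 (float)
first = json.loads('{"coordinates": [0, 0]}')["coordinates"]
later = json.loads('{"coordinates": [1.5, 42.5]}')["coordinates"]

print([type(v).__name__ for v in first])  # ['int', 'int']
print([type(v).__name__ for v in later])  # ['float', 'float']
```

If the template and field names are brought into agreement first (so the explicit geo mapping, rather than dynamic mapping, wins), deleting and re-creating the index should avoid this conflict entirely.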
