Geo_point type error for map visualization in Kibana

Hi,
I am getting an error for a geo_point type field in Kibana visualizations. Kibana is asking me to set fielddata=true, and this prompt comes back after every rollover of the index.
PUT indexname/_mapping
{
  "properties" : {
    "destination" : {
      "properties" : {
        "geo" : {
          "properties" : {
            "location" : {
              "type" : "geo_point",
              "fielddata" : true
            }
          }
        }
      }
    }
  }
}
Running the above API in the Kibana Dev Tools console returns the error below. As far as I know, fielddata=true is only available for text fields, yet Kibana is asking for it on a geo_point. Please help me resolve this issue.
{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "Mapping definition for [location] has unsupported parameters: [fielddata : true]"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "Mapping definition for [location] has unsupported parameters: [fielddata : true]"
  },
  "status": 400
}

Thank you

It looks like your mapping is incorrect. You do not need to set fielddata when declaring your geo_point. Try

PUT indexname/_mapping
{
  "properties" : {
    "destination" : {
      "properties" : {
        "geo" : {
          "properties" : {
            "location" : {
              "type" : "geo_point"
            }
          }
        }
      }
    }
  }
}

Hey @Nathan_Reese
She is asking about the problem that appears after the index rollover happens; that is when Kibana starts asking her to set fielddata=true.
As far as I know, we do not set fielddata for geo_point fields in Elasticsearch, so it is strange behaviour for Elastic to ask for fielddata on a geo_point.
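If the geo_point mapping only exists on the original index, each rollover creates a new index whose location fields get dynamically mapped (typically as text/keyword), which is when Kibana starts suggesting fielddata. A minimal sketch of one way to avoid that in 7.5, assuming the indices follow a logstashflowdata-* pattern (the template name and pattern here are illustrative), is to put the mapping into an index template so every rollover index inherits it:

PUT _template/logstashflowdata
{
  "index_patterns": ["logstashflowdata-*"],
  "mappings": {
    "properties": {
      "destination": {
        "properties": {
          "geo": {
            "properties": {
              "location": { "type": "geo_point" }
            }
          }
        }
      },
      "source": {
        "properties": {
          "geo": {
            "properties": {
              "location": { "type": "geo_point" }
            }
          }
        }
      }
    }
  }
}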

She is asking about the problem that appears after the index rollover happens; that is when Kibana starts asking her to set fielddata=true.

This sounds like a bug with rollover. Can you provide some details about your environment and configuration and how this occurred?

Sure, will provide the details.
The Elasticsearch and Kibana version is 7.5.2.
The Logstash version is 7.6.
I have used the GeoIP filter plugin for the field enrichment.
Steps:

  1. Created a mapping with the GeoIP fields with type geo_point.
  2. Indexed the data into Elasticsearch and created maps in Kibana.

{
  "took": 617,
  "timed_out": false,
  "_shards": {
    "total": 21,
    "successful": 12,
    "skipped": 0,
    "failed": 9,
    "failures": [
      {
        "shard": 0,
        "index": "logstashflowdata-000005",
        "node": "AqAZ_WKARj6a18vAjJNt_Q",
        "reason": {
          "type": "query_shard_exception",
          "reason": "field [destination.geo.location] is not a geo_point field",
          "index_uuid": "GqTtWSnXSWKunprDR8quqQ",
          "index": "logstashflowdata-000005"
        }
      },
      {
        "shard": 0,
        "index": "logstashflowdata-000006",
        "node": "AqAZ_WKARj6a18vAjJNt_Q",
        "reason": {
          "type": "query_shard_exception",
          "reason": "field [destination.geo.location] is not a geo_point field",
          "index_uuid": "c4WaClzxS7yJ4Arx3M9BlA",
          "index": "logstashflowdata-000006"
        }
      },
      {
        "shard": 0,
        "index": "logstashflowdata-000007",
        "node": "AqAZ_WKARj6a18vAjJNt_Q",
        "reason": {
          "type": "query_shard_exception",
          "reason": "field [destination.geo.location] is not a geo_point field",
          "index_uuid": "ubX3KYBwTuOaHt0pUcN9zg",
          "index": "logstashflowdata-000007"
        }
      }
    ]
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  },
  "aggregations": {
    "filter_agg": {
      "2": {
        "buckets": []
      },
      "doc_count": 0
    }
  }
}

This is the response I am now getting for these indices, but before the rollover the field was mapped as geo_point only.
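One way to confirm what the newer rollover indices actually hold for that field is the field-mapping API; the index name below is just taken from the failures above, so substitute whichever index you want to check:

GET logstashflowdata-000007/_mapping/field/destination.geo.location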

What changed that caused the mapping to get updated? What action was taken before the problem occurred?

What does your logstash configuration look like?

This is a sample of the conf I am using; these are the GeoIP filters I have used.

input {
  udp {
    host => "localhost"
    port => 2055
    codec => netflow {
      versions => [5, 7, 9]
    }
    type => netflow
  }
}

filter {
  geoip {
    source => "[source][ip]"
    target => "source_geoip"
  }
  geoip {
    source => "[destination][ip]"
    target => "destination_geoip"
  }

  if [destination_geoip][location][lon] and [destination_geoip][location][lat] {
    mutate { add_field => { "[destination][geo][location]" => "%{[destination_geoip][location][lon]} , %{[destination_geoip][location][lat]}" } }
  }
  if [source_geoip][location][lon] and [source_geoip][location][lat] {
    mutate { add_field => { "[source][geo][location]" => "%{[source_geoip][location][lon]} , %{[source_geoip][location][lat]}" } }
  }
}

output to elasticsearch
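For context, a minimal sketch of how such an elasticsearch output could be wired so rollover indices also pick up the geo_point template; the host, alias, policy name, and template path here are illustrative assumptions, not taken from the original post:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    ilm_enabled => true
    ilm_rollover_alias => "logstashflowdata"                      # rollover alias; new indices are created as logstashflowdata-00000N
    ilm_policy => "logstashflowdata-policy"                       # illustrative ILM policy name
    manage_template => true
    template => "/etc/logstash/templates/logstashflowdata.json"   # template file declaring the geo_point fields
    template_name => "logstashflowdata"
    template_overwrite => true
  }
}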

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.