GeoIP ingest pipeline: index pattern does not contain field types: geo_point

Hi. When displaying a map I get: No Compatible Fields: The "logstash-*" index pattern does not contain any of the following field types: geo_point.
However, the GeoIP ingest pipeline itself is working fine.
I'm running the ELK 5.6 suite.

The GeoIP latitude and longitude are filled in by an ingest pipeline; the pipeline populates country code, lat and lon, but I get the error above when displaying the map because no geo_point field exists. Thanks for your help. Regards

PUT _ingest/pipeline/geoip
{
  "description" : "Add geoip info",
  "processors" : [
    {
      "geoip" : {
        "field" : "Varx"
      }
    }
  ]
}
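For what it's worth, the pipeline can be tested on its own with the simulate API (a minimal sketch; the document and the IP address are made up):

POST _ingest/pipeline/geoip/_simulate
{
  "docs": [
    {
      "_source": {
        "Varx": "8.8.8.8"
      }
    }
  ]
}

The response shows the geoip fields the pipeline would add to each document.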

Here is a sample value that illustrates the fields the pipeline produces:

"geoip": {
"continent_name": "North America",
"city_name": "San Francisco",
"country_iso_code": "US",
"region_name": "California",
"location": {
"lon": -122.4194,
"lat": 37.7749
}

The structure of the data looks correct, but how it is indexed depends on the mapping. Logstash provides a standard index template that applies to indices matching the pattern logstash-*, and that template is what maps geoip.location as geo_point. You can use it as a starting point when creating an index template for your indices.
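For illustration, a minimal template that maps only the geoip.location field as geo_point could look like this (a sketch for 5.x; the template name here is made up, and the bundled Logstash template is much more complete):

PUT _template/logstash-geoip
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "geoip": {
          "properties": {
            "location": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}

Note that a template only affects indices created after it is stored; existing daily indices keep whatever mapping they already have.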

A GET of logstash-2018.01.08 (today's index) returns the geoip mapping shown below. Please note I had a mapping-conflict issue and decided to restart the config from scratch (I moved the logs to a USB key; I use an RPi 3). The map had worked on this same config before.
When I did the setup, I:

  1. ran all the applications,
  2. created the logstash index from the repo,
  3. added my specific fields, for instance:
     PUT /.kibana/_mapping/syslog
     {
       "properties": {
         "Varx": {
           "type": "text"
         }
       }
     }
  4. created the geoip pipeline,
  5. consumed some data, and it ran.

Why does the map not want to process this geoip structure? And yes, my index pattern is logstash-*.

"geoip": {
"properties": {
"city_name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"continent_name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"country_iso_code": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"location": {
"properties": {
"lat": {
"type": "float"
},
"lon": {
"type": "float"
}
}
},
"region_name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},

I'm not sure I understand what I did previously that I did not reproduce this time. Thanks for any pointers, Christian.

For info: I redid the setup twice (cleaning all data, restarting) with the same results. Everything is good except the map. I also confirm that the pipeline filter is working.

It looks like you do not have an index template stored that will apply the correct mapping. Are you using Logstash to ingest the data? If so, what does your Elasticsearch output block look like?
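One quick way to see which templates are stored (a sketch from the Kibana console; "logstash" is the name the default Logstash output uses):

GET _template
GET _template/logstash

If no logstash template comes back, new daily indices fall back to dynamic mapping, which is why lat and lon end up as float instead of a single geo_point field.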

Yes Christian: syslog-ng to Logstash to Elasticsearch to Kibana, all on 5.6, and everything is running fine except the map.
**If so, what does your Elasticsearch output block look like?** I don't know how to get that, but
here is the result of a full GET logstash-2018.01.08 from the Kibana console, if it helps:

"logstash-2018.01.08": {
"aliases": {},
"mappings": {
"syslog": {
"properties": {
"@timestamp": {
"type": "date"
},
"@version": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"CPU": {
"type": "long"
},
"Disk": {
"type": "long"
},
"Port": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"RAM": {
"type": "long"
},
"Temp": {
"type": "long"
},
"Varx": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"geoip": {
"properties": {
"city_name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"continent_name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"country_iso_code": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"location": {
"properties": {
"lat": {
"type": "float"
},
"lon": {
"type": "float"
}
}
},
"region_name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},
"host": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"message": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"received_at": {
"type": "date"
},
"received_from": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"syslog_facility": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"syslog_facility_code": {
"type": "long"
},
"syslog_hostname": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"syslog_message": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"syslog_pid": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"syslog_program": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"syslog_severity": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"syslog_severity_code": {
"type": "long"
},
"syslog_timestamp": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"type": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
},
"settings": {
"index": {
"creation_date": "1515408463158",
"number_of_shards": "5",
"number_of_replicas": "1",
"uuid": "XehJqfE2TFi6juEdTuJ8aA",
"version": {
"created": "5060499"
},
"provided_name": "logstash-2018.01.08"
}
}
}
}

I was referring to the Elasticsearch output block in your Logstash configuration. This usually adds the index template.
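For reference, the template behaviour is controlled by a few options on the elasticsearch output (a sketch; the commented values are defaults or placeholders, not the poster's configuration):

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    manage_template => true                        # install the bundled index template on startup (default)
    # template => "/path/to/custom-template.json"  # optional: supply a custom template file
    # template_name => "logstash"                  # name under which the template is stored
    # template_overwrite => false                  # set to true to replace an existing template
  }
}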

I recently commented out the line stdout { codec => rubydebug } to avoid too much logging.

output {
  # Default output
  elasticsearch { hosts => ["http://localhost:9200"] }
  # stdout { codec => rubydebug }

  # GROK parse failure log file
  if "_grokparsefailure" in [tags] {
    file { path => "/var/log/logstash/grokparsefailure" }
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
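One thing worth noting about this output: documents only pass through an ingest pipeline if the indexing request names it. If the intent was to use the geoip ingest pipeline defined earlier, the output would need to reference it explicitly (a sketch; the pipeline option is part of the elasticsearch output plugin):

output {
  elasticsearch {
    hosts    => ["http://localhost:9200"]
    pipeline => "geoip"   # send each event through the geoip ingest pipeline at index time
  }
}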

Hi, I added the additional config below so the data can now be processed, and it is now running. Maybe I had made more changes before, as I started like this before moving on to understand the pipeline filter. However, VarX is not set in Logstash, so I'm not sure these are the right settings even though they fix the issue. Do you see something more clever? (VarX is the Pi variable set by a script after Logstash registration and before the pipeline is applied.)

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
    geoip {
      source => "VarX"
    }
  }
}
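Provided the bundled Logstash template (or an equivalent one) is installed, indices created after this change should map geoip.location as geo_point. A quick check from the Kibana console (a sketch; only indices created after the fix will show it):

GET logstash-*/_mapping/syslog/field/geoip.location

After that, the logstash-* index pattern in Kibana needs its field list refreshed before the Tile Map visualization can see the new geo_point field.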

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.