We have 14 years of data that loaded fine into Logstash 5.4. We began seeing discrepancies in doc counts in both v5.4 and v6.2, so we decided to add new columns to the data and reload from scratch using the Logstash CSV filter. Our data has a lat and a long column. We are using the proven method from these forums, which has worked for us without hassle before: 1. an index template that maps the geo_point field, and 2. combining the two fields and converting them to float in the conf file. This works, but only for SOME of our data...
PROBLEM: Loading into Logstash v6, some years load perfectly while others throw "illegal latitude" for some records but not all. The docs that fail have latitude values within what we understood to be the valid bounds (-180 / 180).
What is wrong with the content in these fields? Here is our setup:
SAMPLE OF ERRORS:
"reason"=>"illegal latitude value [-111.5913] for from_location"}}}}}
"reason"=>"illegal latitude value [-99.1967] for from_location"}}}}}
"reason"=>"illegal latitude value [-121.5548] for from_location"}}}}}
INDEX TEMPLATE:
PUT /_template/fti
{
  "index_patterns": ["fti*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "doc": {
      "properties": {
        "from_location": {
          "type": "geo_point"
        }
      }
    }
  }
}
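As a sanity check (standard template API, nothing custom) we confirm the template is registered before each reload:

GET /_template/fti

The response echoes the geo_point mapping for from_location, so the template itself appears to be in place.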
LOGSTASH CONF FILE SUB-SECTION:
mutate {
  convert => {
    "FROM_LID_FAC_Latitude"  => "float"
    "FROM_LID_FAC_Longitude" => "float"
  }
}
mutate {
  add_field => { "from_location" => "%{FROM_LID_FAC_Latitude},%{FROM_LID_FAC_Longitude}" }
}
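To see exactly which values trip the error without losing the docs, we are considering a tagging conditional after the converts. Untested sketch (field names are ours; the -90/90 check mirrors the bound the "illegal latitude" error seems to enforce, rather than the -180/180 we expected):

# Untested sketch: tag events whose converted latitude falls outside -90..90
# so they can be routed to a debug output instead of dying at index time.
if [FROM_LID_FAC_Latitude] and ([FROM_LID_FAC_Latitude] > 90.0 or [FROM_LID_FAC_Latitude] < -90.0) {
  mutate { add_tag => ["suspect_latitude"] }
}

Tagged events could then go to a file output for diffing against the source CSV.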