Mapping field as geo_point

I am having some issues with the logstash geoip plugin that I think stem from a lack of understanding of how the elasticsearch output plugin and index mappings interact. Specifically, I need to ensure that source.geo.location and destination.geo.location are both recognized as the geo_point datatype, and I'm having a hard time figuring out exactly what is wrong.

How do I properly format destination.geo.location so the data will be mapped as geo_point without me having to create a separate index/mapping in elasticsearch? The index I want to use already has a mapping of the correct type for this field, but when I send these events to it, new fields are created for destination.geo.location.lat and destination.geo.location.lon instead of the expected behavior of populating the appropriate field with the geo_point data I need. I'm assuming I need a mutate filter here to rearrange some fields so they'll be correctly interpreted, but I'm not sure and could use some pointers.
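To see what was actually created, I've been inspecting the mapping in the dev console (the index name below is just a placeholder for mine):

GET my-index/_mapping

# if dynamic mapping created the fields, lat/lon appear in the response
# as plain number fields instead of a single geo_point field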

Here is the configuration for the geoip plugin in logstash:

....
   if "WAN" in [rule.name] {
        geoip {
            source => "source.ip"
            target => "source.geo"
        }
    }

    if "LAN" in [rule.name] {
        geoip {
                source => "destination.ip"
                target => "destination.geo"
        }
    }

And here's some example output of the contents of destination.geo:

 "destination.geo" => {
           "region_name" => "Washington",
           "region_code" => "WA",
             "city_name" => "Seattle",
                    "ip" => "76.223.92.165",
         "country_code2" => "US",
             "longitude" => -122.3032,
              "location" => {
            "lon" => -122.3032,
            "lat" => 47.54
        },
              "timezone" => "America/Los_Angeles",
              "dma_code" => 819,
          "country_name" => "United States",
              "latitude" => 47.54,
         "country_code3" => "US",
        "continent_code" => "NA",
           "postal_code" => "98108"
    }
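
For reference, that dump comes from a stdout debug output along these lines:

output {
  stdout { codec => rubydebug }   # pretty-prints each event for inspection
}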

If the elasticsearch mapping says that location is a geo_point, then it should get indexed as a geo_point regardless of whether logstash sends it an object containing lat and lon, a string, an array of numbers, or anything else that can reasonably be parsed into lat and lon. If the mapping is wrong, there is nothing you can do in logstash to fix it; this is an elasticsearch issue.
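For example, with a geo_point mapping in place, all of the following index the same point (sketches only; the index name is a placeholder):

POST my-index/_doc
{
  "destination": {
    "geo": {
      "location": { "lat": 47.54, "lon": -122.3032 }
    }
  }
}

# the same point as a "lat,lon" string
POST my-index/_doc
{
  "destination": { "geo": { "location": "47.54,-122.3032" } }
}

# or as a [lon, lat] array (note the reversed, GeoJSON-style order)
POST my-index/_doc
{
  "destination": { "geo": { "location": [ -122.3032, 47.54 ] } }
}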

So it turned out in the end that I misunderstood what was happening with that index.

Elasticsearch was creating the index dynamically. After a bunch of research, the fix I found was to create a new index template in the dev console:

PUT _index_template/unifi-firewall
{
  "index_patterns": [
    "unifi-firewall*"
  ],
  "template": {
    "mappings": {
      "properties": {
        "destination" : {
          "properties" : {
            "geo" : {
              "properties" : {
                "location" : {
                  "type" : "geo_point"
                }
              }
            }
          }
        },
        "source" : {
          "properties" : {
            "geo" : {
              "properties" : {
                "location" : {
                  "type" : "geo_point"
                }
              }
            }
          }
        }
      }
    }
  }
}
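
Worth noting: an index template only applies when a new index is created, so an existing dynamically created index keeps its old mapping until it's deleted or the data is reindexed. The template can be checked against a hypothetical index name before sending any data (the date suffix here is just an example):

POST _index_template/_simulate_index/unifi-firewall-2024.01.01

# the response shows the mappings a matching new index would get,
# including the two geo_point fields from the template above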

Then I made sure that the elasticsearch output plugin in my logstash pipeline sends the data to an index that matches the pattern "unifi-firewall*":

index => "unifi-firewall-%{+YYYY.MM.dd}"
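
In context, the output section looks roughly like this (the host is a placeholder; auth and TLS settings omitted):

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]          # placeholder host
    index => "unifi-firewall-%{+YYYY.MM.dd}"    # matches the template pattern
  }
}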

With those changes, I can plot these firewall logs on the network map in the Security app.
