Creating geoip data for internal networks

I have many internal users, and both public and private address space in use. I would like to be able to usefully separate my networks from the country we're in, and provide sensible geoip data.

For my public range, I can easily fix the text fields I want to change (e.g. we get the wrong timezone, but that doesn't affect @timestamp; and I change [geoip][country_name] to a private value), and for my 10.0.0.0/8 network I can create the [geoip] structure I need.

Unfortunately I can't figure out how to make a geo_point value that gets into Elasticsearch ... I need to create a geo_point for the internal network, and would like to fix the existing ones for the public range. I keep ending up with {"type":"double"} instead of {"type":"geo_point"}, even when I specify an output template for Elasticsearch.

Can someone please point me at an example of creating a geo_point from scratch, and getting it into ES?

Basically, as a string a geo point should be "lat,lon" (as an array it's the other way round, [lon, lat]), and if the field is mapped as a geo_point then it should all fall into place.
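For reference, Elasticsearch accepts a geo_point in a few forms, and the ordering differs between them (the string form is lat,lon while the array form follows the GeoJSON convention of lon,lat). Using the Dunedin coordinates from later in this thread:

    as a string ("lat,lon"):   { "location" : "-45.865,170.525" }
    as an array ([lon, lat]):  { "location" : [ 170.525, -45.865 ] }
    as an object:              { "location" : { "lat" : -45.865, "lon" : 170.525 } }

Whichever form is sent, the field must be mapped as geo_point for Elasticsearch to treat it as one.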

Can you post your relevant LS config excerpts? The mapping sections might be useful too.

After a long time away from this issue, I've managed to get the right setup.

In my logstash config, after I've called geoip {} I look for my IP address ranges and manually repopulate the various fields. I'm effectively declaring a new country.

if [srcip] =~ /^139\.80\./ or [srcip] =~ /^10\./ {
   mutate { replace      => { "[geoip][timezone]"      => "Pacific/Auckland" } }
   mutate { replace      => { "[geoip][country_name]"  => "University of Otago" } }
   mutate { replace      => { "[geoip][country_code2]" => "UO" } }
   mutate { replace      => { "[geoip][country_code3]" => "UoO" } }
   # Rebuild [geoip][location] as an array; note the order is [lon, lat]
   mutate { remove_field => [ "[geoip][location]" ] }
   mutate { add_field    => { "[geoip][location]"      => "170.525" } }
   mutate { add_field    => { "[geoip][location]"      => "-45.865" } }
   mutate { convert      => { "[geoip][location]"      => "float" } }
   mutate { replace      => { "[geoip][latitude]"      => "-45.865" } }
   mutate { convert      => { "[geoip][latitude]"      => "float" } }
   mutate { replace      => { "[geoip][longitude]"     => "170.525" } }
   mutate { convert      => { "[geoip][longitude]"     => "float" } }
}
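As an aside, instead of building the location array field by field, I believe the same point can be written as a single string, since Elasticsearch also accepts the "lat,lon" string form for a geo_point field (note the order is reversed compared with the array form):

    mutate { replace => { "[geoip][location]" => "-45.865,170.525" } }

This avoids the remove_field/add_field/convert dance, at the cost of not having the value as numbers inside the event.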

To help keep me honest (actually, to make sure that all my settings are visible in the configuration) I explicitly set a template when writing to ES, rather than rely on state that's already there.

elasticsearch { embedded => "false"
  cluster => "myclustername"
  protocol => "transport"
  host => "indexing host"
  index => "test" # index name must be in lowercase!
  template => "/etc/logstash/template.d/test"
  template_name => "test"
  template_overwrite =>  "true"
}

(Every time I drop this index, the new template will be used to re-create it.)

For my purposes, I'm using the template to switch off string analysis (most of the fields are log data and analysis doesn't help), and to make sure the geo_points are mapped properly. Here's the full template from the dev box :-

{
  "order" : 0,
  "template" : "test*",
  "settings" : { "index.refresh_interval" : "5s" },
  "mappings" : {
    "_default_" : {
      "dynamic_templates" : [
        {
          "message_field" : {
            "mapping" : { "index" : "analyzed", "omit_norms" : true, "type" : "string" },
            "match_mapping_type" : "string",
            "match" : "message" }
          },
        {
          "string_fields" : {
            "mapping" : { "index" : "not_analyzed", "ignore_above" : 256, "type" : "string" },
            "match_mapping_type" : "string",
            "match" : "*" }
          }
       ],
      "properties" : {
        "geoip" : {
          "dynamic" : true,
          "path" : "full",
          "properties" : { "location" : { "type" : "geo_point" } },
          "type" : "object" },
        "@version" : { "index" : "not_analyzed", "type" : "string" }
      },
      "_all" : { "enabled" : true }
    }
  },
  "aliases" : { }
}
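Once the index has been (re)created, you can confirm the template actually took effect by asking Elasticsearch for the mapping and checking that location comes back as geo_point rather than double (host and port are whatever your cluster uses):

    curl -s 'http://localhost:9200/test/_mapping?pretty' | grep -A 2 '"location"'

If it still shows double, the index was probably created before the template was in place; drop it and let it be re-created.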

The end result is a custom country, with a geo-location that separates my network from other users in the same city. (The point on the map isn't as accurate as I'd like when you zoom in, but I don't think I'll worry about that!)


That's super awesome!

Nice work @jim :smiley:

Good morning. A little help here: I'm having difficulty manipulating city names in Kibana 3.
I have servers on an internal network, and I've configured my Logstash as follows:

if "a4" in [site] {
  mutate { add_field => { "city" => "Chicago"     "st" => "IL" "lat" => "41.7897125" "lng" => "-87.68365"    "division" => "Central" "region" => "Chicago" } }
} else if "ab" in [site] {
  mutate { add_field => { "city" => "Albuquerque" "st" => "NM" "lat" => "35.1101413" "lng" => "-106.5579597" "division" => "Western" "region" => "Mile High" } }
} else if "ag" in [site] {
  mutate { add_field => { "city" => "Augusta"     "st" => "GA" "lat" => "33.4734978" "lng" => "-82.0105148"  "division" => "Central" "region" => "Big South" } }
}
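(As a side note, long if/else chains like this can often be collapsed with the translate filter plugin, if it's installed. A rough sketch for the city lookup only, with regex matching enabled so the site codes match as substrings; each destination field would need its own lookup:)

    translate {
      field       => "site"
      destination => "city"
      regex       => true
      exact       => true
      dictionary  => {
        "a4" => "Chicago"
        "ab" => "Albuquerque"
        "ag" => "Augusta"
      }
    }
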

Results
I'm able to see all events in the data table, but every host shows the same values, e.g.:
City = Augusta
Division = Central

Please help me understand, as I'm pretty new to Logstash configuration.

Thanks in Advance

@inf2ravikumar, please start a new thread for your unrelated question.

Sure, I'll do that. Sorry about it.

Doesn't work:
`The given configuration is invalid. Reason: Expected one of #, => at line 34, column 8 (byte 2419) after filter
...

if {:level=>:fatal}`

Please give more context. There's nothing wrong with the quoted line (except that the regexp is a bit sloppy and should escape the dots).
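For matching IP ranges, by the way, the cidr filter is sturdier than regexps: there's no escaping to get wrong, and it handles networks that don't fall on octet boundaries. Something like this (assuming the address is in [srcip]):

    cidr {
      address => [ "%{srcip}" ]
      network => [ "10.0.0.0/8", "139.80.0.0/16" ]
      add_tag => [ "internal" ]
    }

Then test for the tag instead of the pattern: if "internal" in [tags] { ... }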

There's another thread on this one.

Hi @bastianhoss,
Unsure if you fixed this or not, but I ran into this issue yesterday so I'll post what I wound up doing for future searches...

 if [client_address] =~ /^10\./  {
  mutate { replace      => { "[geoip][timezone]"      => "Pacific/Auckland" } }
  mutate { replace     .....
} else {
  geoip {
    source => "client_address"
    target => "geoip"
    add_tag => [ "nginx-geoip" ]
  }
}