Logstash Setup with GeoIP

I am attempting to set up GeoIP on Logstash 7.3.0. I have seen an old post on how to configure it, but I want to make sure I'm doing it right. In the Logstash config file I add the code below. Is this correct?

filter {
   geoip {
      source => "source.ip"
      target => "src_geoip"
   }
   geoip {
      source => "destination.ip"
      target => "dst_geoip"
   }
}

Thanks in advance

Any advice would be appreciated.

Let me ask this a different way. I have private IPs I want to enrich with data, but I'm not sure how to achieve this. Any guidance would be appreciated.

Can someone give me advice to get started?

If your field name literally contains a period, then this is correct. If instead the source object contains a field called ip, then you should use "[source][ip]".

To have the fields [src_geoip][location] be a geo_point you will need an index template that tells elasticsearch that.

If by "private" IPs you mean addresses in the blocks reserved by RFC 1918 then you will have to build your own database in which to look them up.
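
To illustrate the bracket syntax: assuming the event carries a nested source object with an ip field inside it (as Beats typically produce), the filter would look like this sketch:

    filter {
       geoip {
          source => "[source][ip]"
          target => "src_geoip"
       }
    }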

@Badger thank you for your response. I have a couple of followup questions.

To have the fields [src_geoip][location] be a geo_point you will need an index template that tells elasticsearch that.

I thought I read that the default index template handled the geo_point mapping?

Can I set up something like below in my Logstash config file in the filter section?

if [source.ip] =~ /^10\.3\./ {
   mutate { replace      => { "[geoip][timezone]"      => "America/New_York" } }
   mutate { replace      => { "[geoip][country_name]"  => "Office" } }
   mutate { replace      => { "[geoip][country_code2]" => "O" } }
   mutate { replace      => { "[geoip][country_code3]" => "O" } }
   mutate { remove_field => [ "[geoip][location]" ] }
   mutate { add_field    => { "[geoip][location]"      => "-106.158" } }
   mutate { add_field    => { "[geoip][location]"      => "109.768" } }
   mutate { convert      => [ "[geoip][location]",        "float" ] }
   mutate { replace      => [ "[geoip][latitude]",        "109.768" ] }
   mutate { convert      => [ "[geoip][latitude]",        "float" ] }
   mutate { replace      => [ "[geoip][longitude]",       "-106.158" ] }
   mutate { convert      => [ "[geoip][longitude]",       "float" ] }
}

The default template for an elasticsearch output defines the location field of an object called geoip to be a geo_point. That matches the default target for the geoip filter. If you use a different target then you need a different template.
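
For example, a custom target such as src_geoip could be covered by a legacy index template along these lines (a sketch; the template name and index pattern here are assumptions, adjust them to your own index naming):

    PUT _template/geoip_custom
    {
      "index_patterns": ["logstash-*"],
      "mappings": {
        "properties": {
          "src_geoip": {
            "properties": {
              "location": { "type": "geo_point" }
            }
          }
        }
      }
    }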

You could build a geoip object like that, yes.

When I try to implement that in the Logstash config file I get a syntax error and the main pipeline stops. I can't seem to find how to do this on 7.3.0. What am I doing wrong?

Thank you for your help.

Hello,

If you are using Filebeat you should look at https://www.elastic.co/guide/en/ecs/current/ecs-geo.html

The new Filebeat index template (I don't know since which version; for me it is 7.3.0) does not use geoip.location but geo.location as its geo_point field.

so you may use something like
target => "[destination][geo]"

and the field will fit the template.

lcer
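
For example, to enrich both directions into ECS-style fields, something like this sketch (assuming Beats-style nested source and destination objects on the event):

    filter {
       geoip {
          source => "[source][ip]"
          target => "[source][geo]"
       }
       geoip {
          source => "[destination][ip]"
          target => "[destination][geo]"
       }
    }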

So, my Logstash config filter section should look like this then?

filter {
   geoip {
      source => "source.ip"
      target => "[destination][geo]"
   }
   if [source.ip] =~ /^10\.3\./ {
      mutate { replace      => { "[geoip][timezone]"      => "America/New_York" } }
      mutate { replace      => { "[geoip][country_name]"  => "Office" } }
      mutate { replace      => { "[geoip][country_code2]" => "O" } }
      mutate { replace      => { "[geoip][country_code3]" => "O" } }
      mutate { remove_field => [ "[geoip][location]" ] }
      mutate { add_field    => { "[geoip][location]"      => "-106.158" } }
      mutate { add_field    => { "[geoip][location]"      => "109.768" } }
      mutate { convert      => [ "[geoip][location]",        "float" ] }
      mutate { replace      => [ "[geoip][latitude]",        "109.768" ] }
      mutate { convert      => [ "[geoip][latitude]",        "float" ] }
      mutate { replace      => [ "[geoip][longitude]",       "-106.158" ] }
      mutate { convert      => [ "[geoip][longitude]",       "float" ] }
   }
}

Hello,

  1. You should use [geo] instead of [geoip].

  2. If source.ip is a nested field, you should address it as [source][ip].

  3. You should change your if structure. If the geoip filter tries to resolve a local address, it fails, and in that case none of the [geo/geoip][...] fields will exist.

    if [source][ip] =~ /^10\.3\./ {
       mutate {
          add_field => ...
       }
    } else {
       geoip {
          source => "[source][ip]"
          ...
       }
    }

Thank you for all your help. Does this look right? I'm new to this and learning as I go; I figure a lot out by trial and error. Another dumb question: how do I know if a field is nested? I'm using Packetbeat for the IP, and one of the fields is source.ip.

if [source][ip] =~ /^10\.3\./ {
   mutate { replace      => { "[geo][timezone]"      => "America/New_York" } }
   mutate { replace      => { "[geo][country_name]"  => "Office" } }
   mutate { replace      => { "[geo][country_code2]" => "O" } }
   mutate { replace      => { "[geo][country_code3]" => "O" } }
   mutate { remove_field => [ "[geo][location]" ] }
   mutate { add_field    => { "[geo][location]"      => "-106.158" } }
   mutate { add_field    => { "[geo][location]"      => "109.768" } }
   mutate { convert      => [ "[geo][location]",        "float" ] }
   mutate { replace      => [ "[geo][latitude]",        "109.768" ] }
   mutate { convert      => [ "[geo][latitude]",        "float" ] }
   mutate { replace      => [ "[geo][longitude]",       "-106.158" ] }
   mutate { convert      => [ "[geo][longitude]",       "float" ] }
} else {
   geoip {
      source => "[source][ip]"
      target => "[destination][geo]"
   }
}

Looks good, but:

Replace does not create fields, and the fields do not exist until created. The geoip filter creates these fields if it is successful, and you can use mutate add_field to create them yourself.

lcer
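
For example, since the geoip filter is skipped for these addresses, the office branch could create the fields with add_field instead of replace, in the spirit of the snippets above (a sketch; the America/New_York timezone string is my assumption for an office on the US East coast):

    if [source][ip] =~ /^10\.3\./ {
       mutate {
          add_field => {
             "[geo][timezone]"      => "America/New_York"
             "[geo][country_name]"  => "Office"
             "[geo][country_code2]" => "O"
          }
       }
       mutate { add_field => { "[geo][location]" => "-106.158" } }
       mutate { add_field => { "[geo][location]" => "109.768" } }
       mutate { convert   => [ "[geo][location]", "float" ] }
    }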

How would I go about creating them the first time around? Would I use add_field instead of replace, and once the fields are created, change it back to replace? I'm not sure how to address that issue.

Well, you misunderstand elasticsearch.
You do not create fields for the elasticsearch "table". You have to create fields in each document. You can store very small documents containing only a few fields in the same elasticsearch index together with documents containing hundreds of fields. But if a field is not inside the document, you have to create it (e.g. with mutate add_field) before you can use it.

lcer

I'm trying to understand elasticsearch a little bit every day. Is there a way to see which fields are already created? Is there a better way to create the fields before I implement the Logstash filter?

Thank you for helping me understand.

Hello

elasticsearch stores documents.
Every event in Logstash is one document.
A document may have several fields; elasticsearch does not need to know in advance which fields. You can create any field inside a Logstash event and elasticsearch will store it. If every event should have a field called "test_data", you have to create it in every event separately. You cannot create it "one time for all documents", not in Logstash and not in elasticsearch.

lcer
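
As a sketch, creating that "test_data" field in every event from Logstash would look like this (the value is just a placeholder):

    filter {
       mutate {
          add_field => { "test_data" => "some value" }
       }
    }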

If I wanted to add another different office would I add another if statement? Something like below.

if [source][ip] =~ /^10\.3\./ {
...
} else if [source][ip] =~ /^10\.4\./ {
...
} else {
   geoip {
      source => "[source][ip]"
      target => "[destination][geo]"
   }
}

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.