GeoIP creates everything but the geoip.location field

Good morning everyone,
today I'm trying to implement the geoip filter, using this config file:

    filter {
        if [host] =~ /10\.0\.0\.1/ {

            grok {
                    match => [ "message", "\A%{SYSLOG5424PRI}%{SYSLOGTIMESTAMP}( )%{WORD}(: )(?:[0-9])?,(?:[0-9]*,)?(?:[0-9]*,)?(?:[0-9]*,)?%{WORD:interface},%{WORD},%{WORD:action},%{WORD:direction},(?:[0-9]*,)(?:[0-9a-z]*,)?(?:[0-9]*,)?(?:[0-9]*,)?(?:[0-9]*,)?(?:[0-9]*,)?%{WORD},(?:[0-9]*,)?%{WORD:transport},(?:[0-9]*,)?%{IP:ipSource},%{IP:ipDestination},%{GREEDYDATA}"]
            }

            geoip {
                    add_tag => [ "GeoIP" ]
                    source => "ipSource"
            }
        }
    }


While I get most of the GeoIP fields (country name, latitude, longitude, etc.), the only missing field is geoip.location, as you can see in this screenshot:

Any idea about this problem?

Because it's an RFC1918 (private) address.

I don't think that is a private address! Anyway, if I get the latitude and longitude for this address, why can't geoip build geoip.location from them?

It is there, but it has two subfields: geoip.location.lat and geoip.location.lon.

Oh, I was looking at the if [host] =~ /10\.0\.0\.1/.

I didn't even look at the picture, sorry.

No, because a few days ago, on the same server, this plugin generated geoip.location: first with a wrong type, but then it worked like a charm.
Why the hell is Logstash doing things differently now? I didn't change my grok line; I only deleted all the indices to start from a fresh one.

Are you having Logstash manage the index templates? Have you got an index template that applies the correct mappings for this field to your pfsense index? (By default, Logstash's index template only applies to indices matching logstash-*.)

I think you have found something! Before today I had only one index (logstash-*), but now I use the IP to conditionally split into different indices. So, for my pfSense server's IP, I get this output:

 "location": {
                "properties": {
                  "lat": {
                    "type": "float"
                  "lon": {
                    "type": "float"

That is indeed not correct. You can copy the default Logstash template, change the index pattern it matches, and then upload it under a different name. You will need to reindex the data, though.
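For comparison, the default logstash-* template maps geoip.location as a geo_point rather than an object with float subfields. A minimal excerpt of what the correct mapping looks like (exact surrounding structure varies by Elasticsearch version):

    "location": {
      "type": "geo_point"
    }

It is this geo_point type that lets Kibana treat the field as a map coordinate.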

Ok, that sounds good, but I'm unable to make a copy of the Logstash template; I'm a newbie :confounded:
I read many posts, but things seem to have changed (what they describe never works).

Could you help me do that operation with the latest version of ELK?

The template can be found here. Download it and change "template" : "logstash-*", to match your index name, then upload the index template under a suitable name. It will then apply to all new indices created that match the specified pattern.
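A rough sketch of those steps with curl, assuming Elasticsearch on localhost:9200, an index pattern pfsense-*, and a made-up file name pfsense-template.json (depending on your Elasticsearch version you may also need -H 'Content-Type: application/json'):

    # After downloading the template and changing
    # "template" : "logstash-*" to "template" : "pfsense-*",
    # upload it under a new name:
    curl -XPUT 'http://localhost:9200/_template/pfsense' -d @pfsense-template.json

    # Verify it was stored:
    curl -XGET 'http://localhost:9200/_template/pfsense?pretty'

New indices matching pfsense-* will then pick up the mappings; existing indices keep their old mappings until reindexed.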


Ok, thank you very much; with your help I have been able to do the job!
As a beginner with ELK, I found it very difficult to create and update indices and data types: copy/edit/paste using curl commands, then delete indices, create new ones, etc. It's not really natural or easy, especially when the syntax has evolved so that forum threads can't help you.

Have you thought about simplifying this process?

Anyway, this project is awesome and so are you!

   if [host] =~ /10\.0\.0\.1/ {

Using a plain string comparison will be faster and won't incorrectly match addresses like 10.0.0.10, since that regex is unanchored.
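The same condition rewritten as a plain string comparison (a sketch; keep your existing grok and geoip blocks inside it):

    filter {
        if [host] == "10.0.0.1" {
            # grok / geoip filters here
        }
    }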

Thanks for this tip!
