Geo_point automatically

Hello,

I'm struggling to get the geo_point type added to my index automatically.

All the solutions I have found, official and unofficial, say you have to go through Kibana Dev Tools and PUT the index mapping by hand. But I run Elasticsearch in Docker, and I would like geoip.coordinates to get the type "geo_point" automatically.

We can't do ' convert => [ "[geoip][coordinates]", "geo_point"] ' because geo_point is not a supported conversion type in the mutate filter.

Is there another way to convert?

My logstash.conf file:

    input {
        beats {
            port => 5000
            host => "0.0.0.0"
        }
    }
    filter {
        grok {
            match => [ "message", "\[%{IP:server_ip}\]\[%{IP:client_ip}\]  - %{NUMBER:size} %{NUMBER:duration} ms"]

        }
        geoip {
            source => "client_ip"
            target => "geoip"
            database => "/usr/share/logstash/config/GeoLite2City.mmdb"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
        }
        mutate {
            convert => {
                "duration" => "float"
                "size" => "float"
            }
        }
        mutate {
            convert => [ "[geoip][coordinates]", "float"]
        }
    }

    output {
        elasticsearch {
            hosts => ["172.10.0.2:3600"]
            index => "datelogs-%{+YYYY.MM.dd}"
            # template_name => "logs-*" # don't know if it's necessary
        }
        stdout { codec => rubydebug }
    }

The concept of a geo_point does not exist in Logstash. The only way to tell Elasticsearch that a field is a geo_point is to use an index template.
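For example, a minimal index template set from Kibana Dev Tools might look like this. The template name and index pattern below are just placeholders matching the config above; the exact request body also varies by Elasticsearch version (legacy `_template` vs newer `_index_template`, and 6.x still expects a mapping type):

    PUT _template/datelogs
    {
      "index_patterns": ["datelogs-*"],
      "mappings": {
        "properties": {
          "geoip": {
            "properties": {
              "coordinates": { "type": "geo_point" }
            }
          }
        }
      }
    }

Any new index whose name matches `datelogs-*` will then map `[geoip][coordinates]` as a geo_point.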

Thanks for the answer!
When I create the index, geoip.coordinates is already mapped as a number and I can't change it.
I already have data.

How can I change it?

You cannot change the type of a field once it has been indexed. You would need to create a new index (one that picks up a template with the right mapping). One option for migrating your existing data into it is the reindex API.
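As a sketch, assuming the destination index picks up a template that maps the field as geo_point, the reindex call from Dev Tools would look like this (both index names are placeholders):

    POST _reindex
    {
      "source": { "index": "datelogs-2019.01.01" },
      "dest":   { "index": "logstash-2019.01.01" }
    }

The old index can be deleted afterwards once you have verified the new mapping.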

After much research, I discovered several points.

First of all, here is my final version of logstash.conf:

    input {
        beats {
            port => 5000
            host => "0.0.0.0"
        }
    }
    filter {
        grok {
            match => [ "message", "\[%{IP:server_ip}\]\[%{IP:client_ip}\]  - %{NUMBER:size} %{NUMBER:duration} ms"]
        }
        geoip {
            source => "client_ip"
        }
        mutate {
            convert => {
                # even though grok matches them with NUMBER, the captured fields are strings; mutate turns them into floats
                "duration" => "float"
                "size" => "float"
            }
        }
    }
    output {
        elasticsearch {
            hosts => ["172.10.0.2:3600"]
            index => "logstash-%{+YYYY.MM.dd}" # important: must match the template's index pattern
        }
        stdout { codec => rubydebug }
    }

Now, the list of points:

  • For info, I use Docker with Elasticsearch, Kibana, Logstash and Filebeat running on it

  • When the stack starts, Logstash installs a default "logstash" index template into Elasticsearch. You can see it by going to Dev Tools in Kibana: GET /_template/logstash
    In this template we can see the geo_point mapping needed for our maps.
    So, to link this template to our log files, our index names must match the template's pattern, here "logstash-*".
    To set this name, edit the index option of the elasticsearch output in logstash.conf.

  • In Kibana, under Index Patterns, the pattern should be "logstash-*" to cover our log indexes, which are now named "logstash-{DATE}"

  • Be careful that your Docker containers are not keeping old state between runs (old indexes or Kibana index patterns, for example)

  • The mutate on geoip is useless in logstash.conf; don't write it:

    mutate {
        convert => [ "[geoip][location]", "float" ] # useless!
    }
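For reference, the relevant piece of the default logstash template (as returned by GET /_template/logstash) looks roughly like this; the exact structure depends on your Logstash and Elasticsearch versions, and other fields are omitted here:

    {
      "logstash": {
        "index_patterns": ["logstash-*"],
        "mappings": {
          "_default_": {
            "properties": {
              "geoip": {
                "properties": {
                  "location": { "type": "geo_point" }
                }
              }
            }
          }
        }
      }
    }

This is why naming the indexes "logstash-%{+YYYY.MM.dd}" is enough: they match "logstash-*" and inherit the geo_point mapping for [geoip][location].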

I hope it helps someone.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.