I'm brand new to this product. I have an nginx server sending logs via Filebeat to Logstash, which is working. The geoip filter is also working as far as I can tell, except that something seems to be missing that turns geoip.location into a geo_point. I've determined that the geoip information is being populated in Logstash but not being mapped in Elasticsearch. Here is my Logstash config:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    patterns_dir => ["/etc/logstash/patterns"]
    match => { "message" => "%{SED_NGINX_COMBINE}" }
  }
  geoip {
    source => "clientip"
    target => "geoip.location"
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
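For what it's worth, here's how I've been spot-checking that the geoip fields actually reach Elasticsearch (a sketch, assuming Elasticsearch is on localhost:9200 and my indices match nginx-*):

```shell
# Pull one recent document and look for the geoip fields in _source.
# Assumes Elasticsearch is reachable on localhost:9200 and indices match nginx-*.
curl -XGET 'localhost:9200/nginx-*/_search?size=1&pretty'
```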
I'm also using a custom log pattern (Combined plus the X-Forwarded-For header). Here it is:
SED_NGINX_COMBINE %{IPORHOST:clientip} %{HTTPDUSER:ident} %{HTTPDUSER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{QUOTEDSTRING:xforwardedfor_header}
I've read that "the default Elasticsearch template provided with the elasticsearch output maps the [geoip][location] field to an Elasticsearch geo_point," though I'm not sure how to verify that or how to actually apply it.
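One way I found to check whether that default template is installed (a sketch, assuming Elasticsearch is on localhost:9200 and the template is named "logstash", which I believe is the default name the elasticsearch output uses):

```shell
# List the index template that the Logstash elasticsearch output installs by default.
# Assumes Elasticsearch is reachable on localhost:9200 and the template name is "logstash".
curl -XGET 'localhost:9200/_template/logstash?pretty'

# If the template is installed, the output should include a "geoip" mapping
# with "location" of type "geo_point"; an empty {} means it isn't installed.
```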
While troubleshooting I came across the command "curl -XGET localhost:9200/nginx-*/_mapping" to see the mapping layout, and in doing so I noticed that the geo_point type wasn't listed in the mapping. So the geoip information is being populated in Logstash but not being mapped in Elasticsearch. I just don't know how to fix this, and I'm not very familiar with JSON.
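In case it's relevant, here's a sketch of what I understand a manual index template might look like for my nginx-* indices. The template name "nginx-geoip" is arbitrary, and I haven't tested this (the "template" key may differ between Elasticsearch versions):

```shell
# Sketch of a manual index template mapping geoip.location as geo_point
# for indices matching nginx-*. Template name "nginx-geoip" is my own
# invention; the body follows the ES 5.x legacy template format.
curl -XPUT 'localhost:9200/_template/nginx-geoip' -H 'Content-Type: application/json' -d '
{
  "template": "nginx-*",
  "mappings": {
    "_default_": {
      "properties": {
        "geoip": {
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }
    }
  }
}'
```

As I understand it, templates only apply when an index is created, so existing nginx-* indices would need to be reindexed (or deleted) before the new mapping takes effect.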
Thanks