How do you make geo_points in Elasticsearch 6.x?

Hello! I think I have a pretty good basic foundation set up for my central logging project so far.

I have a number of hosts running filebeat, shipping syslogs, apache2 logs, and the apt history log to one of two identical logstash servers, which then filter the data and forward it on to a three-node Elasticsearch cluster. (My configurations are below.)

However, I'm a bit baffled as to why the logstash geoip plugin doesn't create the geo_point type automatically along with the rest of the geoip information.

In Kibana, attempting to create a map results in the dreaded message:

No Compatible Fields: The "filebeat-*" index pattern does not contain any of the following field types: geo_point

From what I've read, it seems that the way to resolve this problem on the 6.x ELK stack is to make a new index template for filebeat. But when I run

curl -XGET 'my_elasticsearch_node:9200/_template/*?pretty'

the only template I see by default is the one for Kibana. That's confusing, because everything I've read so far (mostly for much older versions of ELK, granted) implies there should be an existing filebeat template for indexing filebeat data, especially given that my logstash servers are set to

manage_template => false

in their output clauses. So I've read the general index template doc at https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html#indices-templates-exists and the DigitalOcean guide at https://www.digitalocean.com/community/tutorials/how-to-map-user-location-with-geoip-and-elk-elasticsearch-logstash-and-kibana but I don't really understand how to apply them in this case.
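
If I understand the docs right, the template would need to map the geoip.location field (which is where the logstash geoip filter puts the coordinates by default) as a geo_point. Here's my rough, untested guess at a minimal 6.x template body — the "doc" mapping type name is just my assumption:

{
  "index_patterns": ["filebeat-*"],
  "mappings": {
    "doc": {
      "properties": {
        "geoip": {
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }
    }
  }
}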

How do you conceptualize how these moving parts work together? Could someone please explain that and walk me through how to get geo_points working?

filebeat config
/etc/filebeat/filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
  fields:
    type: syslog
- input_type: log
  paths:
    - /var/log/apache2/*.log
  fields:
    type: apache2
- input_type: log
  paths:
    - /var/log/apt/history.log
  fields:
    type: apt
  multiline.pattern: Start-Date
  multiline.negate: true
  multiline.match: after
  multiline.flush_pattern: End-Date

output.logstash:
  hosts: ["logstash1_ip:5044", "logstash2_ip:5044"]
  ssl.enabled: true
  ssl.supported_protocols: [TLSv1.2]
  ssl.certificate_authorities: ["/etc/filebeat/ca_keys/ca-chaincert.pem"]
  ssl.certificate: "/etc/filebeat/ca_keys/hostcert.pem"
  ssl.key: "/etc/filebeat/ca_keys/hostkey.pem"

logstash config
/etc/logstash/conf.d/logstash.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/ca_keys/ca-chaincert.pem"]
    ssl_certificate => "/etc/logstash/ca_keys/logstash1cert.pem"
    ssl_key => "/etc/logstash/ca_keys/logstash1key.pem"
    ssl_verify_mode => "force_peer"
    client_inactivity_timeout => 180
  }
}

filter {
  if [fields][type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:msg}" }
    }
    date {
      match => [ "timestamp", "MMM dd HH:mm:ss" ]
    }
  }
  else if [fields][type] == "apache2" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
      source => "clientip"
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
  else if [fields][type] == "apt" {
    grok {
      match => { "message" => "Start-Date: %{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} (?<time>%{HOUR}:%{MINUTE}:%{SECOND}) Commandline: %{GREEDYDATA:commandline} (Upgrade: %{GREEDYDATA:upgrade})?(Install: %{GREEDYDATA:install})? End-Date: %{GREEDYDATA:junk}" }
    }
    mutate {
      add_field => { "timestamp" => "%{year}-%{month}-%{day} %{time}" }
      remove_field => ['junk']
    }
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["es_node1_ip:9200", "es_node2_ip:9200", "es_node3_ip:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

https://www.elastic.co/blog/geoip-in-the-elastic-stack runs through how to get things working, as well as some commonly seen errors.

However, it doesn't look like you have a geoip filter defined anywhere?

(geoip is actually in my logstash filter)

Thanks for your response! Your blog post helped a lot :slight_smile:
This one was also extremely helpful for getting a better handle on how mappings work:
https://www.elastic.co/blog/logstash_lesson_elasticsearch_mapping

Could I suggest that some of the salient excerpts make it into the documentation site where they'd be easier to find?

So the exact steps I took to get this working (for the benefit of anyone else who makes it to this topic) were:

  1. Copy this file to the root of one of your elasticsearch nodes (click on Raw so it's easier to copy/paste):
     https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/master/lib/logstash/outputs/elasticsearch/elasticsearch-template-es6x.json

  2. I had to edit the top line, which reads

"template" : "logstash-*",

to read

"template" : "filebeat-*",

  3. Now apply this mapping (note that the -H flag is omitted in the blogs and docs I saw, but it needs to be there or you get a 406 error complaining about the Content-Type):

curl -XPUT http://elasticsearch_node1:9200/_template/filebeat_template?pretty -H 'Content-Type: application/json' -d @elasticsearch-template-es6x.json
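
To sanity-check that the template actually registered, you can read it back from the same node (same template name as in the PUT above):

curl -XGET 'http://elasticsearch_node1:9200/_template/filebeat_template?pretty'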

  4. Blow away your existing data with:

curl -XDELETE 'elasticsearch_node1:9200/_all?pretty'

  5. You're done! The new indices flowing into Elasticsearch will include a geo_point.
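
If you want to double-check, once fresh data has arrived you can ask Elasticsearch how the field ended up mapped (geoip.location being the geoip filter's default target field):

curl -XGET 'http://elasticsearch_node1:9200/filebeat-*/_mapping/field/geoip.location?pretty'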

Yeah, I somehow missed that :frowning:

Did you see "Configure Elasticsearch index template loading" in the Filebeat Reference? I am wondering if it covers this?

I wouldn't do that; it removes everything, including any dashboards you may have built. Try to target specific indices or patterns instead.
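
For example, a delete scoped to just the filebeat indices would look like:

curl -XDELETE 'elasticsearch_node1:9200/filebeat-*?pretty'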

Thanks for the heads up! Will keep that in mind.
