Hello! I think I have a pretty solid basic foundation set up for my central logging project so far.
I have a number of hosts running filebeat, shipping syslogs, apache2 logs, and the apt history log to one of two identical logstash servers, which then filter the data and forward it on to a three-node Elasticsearch cluster. (My configurations are below.)
However, I'm baffled as to why the Logstash geoip plugin doesn't automatically create a geo_point field along with the rest of the geoip information.
In Kibana, attempting to create a map results in the dreaded message:
No Compatible Fields: The "filebeat-*" index pattern does not contain any of the following field types: geo_point
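For what it's worth, my understanding is that I can check how the field actually got mapped with the field-mapping API (node name is a placeholder, same as in my template query below):
curl -XGET 'my_elasticsearch_node:9200/filebeat-*/_mapping/field/geoip.location?pretty'
If I understand dynamic mapping correctly, without a matching template the geoip.location field just gets mapped as an object with two numeric subfields (lat and lon), which would explain why Kibana can't find any geo_point.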
From what I've read, it seems that the way to resolve this problem on the 6.x ELK stack is to create a new index template for filebeat. But when I run
curl -XGET 'my_elasticsearch_node:9200/_template/*?pretty'
the only template I see by default is the one for Kibana. That's confusing, because everything I've read so far (mostly for much older versions of ELK, granted) implies there should already be a filebeat template in place to index filebeat data, especially given that my logstash servers have
manage_template => false
in their output clauses. I've read the general index template doc at https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html#indices-templates-exists and the Digital Ocean guide at https://www.digitalocean.com/community/tutorials/how-to-map-user-location-with-geoip-and-elk-elasticsearch-logstash-and-kibana, but I don't really understand how to apply them in this case.
How do you conceptualize how these moving parts work together? Could someone please explain that and walk me through how to get geo_points working?
filebeat config
/etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
  fields:
    type: syslog
- input_type: log
  paths:
    - /var/log/apache2/*.log
  fields:
    type: apache2
- input_type: log
  paths:
    - /var/log/apt/history.log
  fields:
    type: apt
  multiline.pattern: Start-Date
  multiline.negate: true
  multiline.match: after
  multiline.flush_pattern: End-Date

output.logstash:
  hosts: ["logstash1_ip:5044", "logstash2_ip:5044"]
  ssl.enabled: true
  ssl.supported_protocols: [TLSv1.2]
  ssl.certificate_authorities: ["/etc/filebeat/ca_keys/ca-chaincert.pem"]
  ssl.certificate: "/etc/filebeat/ca_keys/hostcert.pem"
  ssl.key: "/etc/filebeat/ca_keys/hostkey.pem"
logstash config
/etc/logstash/conf.d/logstash.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/ca_keys/ca-chaincert.pem"]
    ssl_certificate => "/etc/logstash/ca_keys/logstash1cert.pem"
    ssl_key => "/etc/logstash/ca_keys/logstash1key.pem"
    ssl_verify_mode => "force_peer"
    client_inactivity_timeout => 180
  }
}

filter {
  if [fields][type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:msg}" }
    }
    date {
      match => [ "timestamp", "MMM dd HH:mm:ss" ]
    }
  }
  else if [fields][type] == "apache2" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
      source => "clientip"
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
  else if [fields][type] == "apt" {
    grok {
      match => { "message" => "Start-Date: %{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{TIME:time} Commandline: %{GREEDYDATA:commandline} (Upgrade: %{GREEDYDATA:upgrade})?(Install: %{GREEDYDATA:install})? End-Date: %{GREEDYDATA:junk}" }
    }
    mutate {
      add_field => { "timestamp" => "%{year}-%{month}-%{day} %{time}" }
      remove_field => [ "junk" ]
    }
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["es_node1_ip:9200", "es_node2_ip:9200", "es_node3_ip:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
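For completeness, the other approach I've seen mentioned in the Beats docs is to load Filebeat's own bundled template into Elasticsearch manually, since my Logstash outputs have manage_template => false and so never install one. If I've understood it, that would look something like this (run on a filebeat host; node name is a placeholder):
filebeat export template > filebeat.template.json
curl -XPUT -H 'Content-Type: application/json' 'my_elasticsearch_node:9200/_template/filebeat' -d @filebeat.template.json
Though I'm not sure the stock Filebeat template covers the top-level geoip.location field that the Logstash geoip filter adds, so I suspect I'd still need to merge in the geo_point mapping from the sketch above. Any guidance appreciated!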