I'm currently evaluating the Elastic Stack (6.1.2) and trying to get an initial basic config going. I see that Filebeat ships with field mappings for its core modules (system, apache, etc.), and as far as I can tell, if I configure Filebeat to send logs directly to Elasticsearch, those records are indexed with the useful predefined mappings.
However, I'd like the power to manipulate other logs with Logstash, and it seems that if I put Logstash in the middle I lose that benefit. I can't see a way of achieving the same result without manually adding all the grok patterns myself. Is there a better way than defining them all by hand?
My Logstash config is currently exactly as shown in the docs:
input {
  beats {
    port => 5044
  }
}

# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
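For context, the kind of manual parsing I'm hoping to avoid would mean filling in the filter block with grok patterns per log type, along these lines (a sketch only; the exact pattern and the conditional on the `[fileset][module]` field that Filebeat modules attach are my assumptions, not something I have working):

```
filter {
  # Only parse events that came from Filebeat's system module
  if [fileset][module] == "system" {
    grok {
      # Example pattern for a syslog-style line; each log format
      # would need its own pattern maintained by hand
      match => { "message" => "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:system.syslog.hostname} %{DATA:system.syslog.program}(?:\[%{POSINT:system.syslog.pid}\])?: %{GREEDYMULTILINE:system.syslog.message}" }
      pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
    }
  }
}
```

Multiplying that across every module and fileset is exactly the duplication of effort I'd like to avoid.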