I have recently set up a full Beats 6 + ELK 6 stack: Filebeat + Metricbeat agents on 2 servers, and ELK on another one.
I've configured both Metricbeat and Filebeat the same way to send data to Logstash over TLS, with the Elasticsearch output disabled. For both of them, I loaded the ES templates and Kibana dashboards.
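For reference, the output section is essentially the same in both filebeat.yml and metricbeat.yml; a minimal version (host and CA path are placeholders):

```yaml
output.logstash:
  hosts: ["logstash.example.com:5044"]
  ssl.certificate_authorities: ["/etc/beats/ca.pem"]

# output.elasticsearch is left disabled
```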
Metricbeat data is correctly exported to Logstash: I can see all the fields in the Metricbeat logs and in Logstash with the rubydebug output.
On the Filebeat side, with the Apache2 module enabled, messages are properly transmitted, but the module's fields are not extracted: Logstash only receives the plain message (the raw log line). So I have to parse the messages manually with grok.
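For now my workaround looks roughly like this (a minimal sketch, not my exact filter):

```
filter {
  # Filebeat itself adds fileset.module / fileset.name, so these are available
  if [fileset][module] == "apache2" and [fileset][name] == "access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}
```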
Is that the right way to do it? Did I miss something? Should I prefer the ES output over the Logstash one for this use case?
The way modules work in Filebeat is that most of them actually use an ingest pipeline on Elasticsearch to do the extraction and transformation of the original data. Only the Elasticsearch output sends events through those pipelines, so when you send your events to Logstash instead, they never reach the ingest pipeline and you only get the raw data.
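You can verify that the pipelines exist in Elasticsearch once they've been loaded; for example, from the Kibana Dev Tools console (the exact name depends on your Filebeat version, so I'm using a wildcard here):

```
GET _ingest/pipeline/filebeat-*-apache2-access-default
```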
There are a few solutions for that, depending on your needs and architecture.
Use Filebeat's Elasticsearch output directly; it will send the data to the correct ingest pipeline.
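With the Elasticsearch output enabled, Filebeat loads the pipelines itself and sets the pipeline parameter on every bulk request, so the module fields arrive parsed; a minimal sketch (host is a placeholder):

```yaml
output.elasticsearch:
  hosts: ["https://es.example.com:9200"]
  # no pipeline setting needed here: the apache2 module
  # selects the right ingest pipeline per fileset
```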
Push the ingest pipelines to Elasticsearch, then use conditionals in the Logstash config to route the events to an Elasticsearch output with the `pipeline` option configured.
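Filebeat loads its pipelines into Elasticsearch whenever its Elasticsearch output is enabled, so you can do that once from any machine. The Logstash side would then look roughly like this (a sketch; the pipeline name follows the filebeat-&lt;version&gt;-&lt;module&gt;-&lt;fileset&gt;-default naming that Filebeat 6 uses, so double-check it against what Elasticsearch shows you):

```
output {
  if [fileset][module] == "apache2" {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      # route the event through the module's ingest pipeline
      pipeline => "%{[@metadata][beat]}-%{[@metadata][version]}-apache2-%{[fileset][name]}-default"
    }
  } else {
    # everything else (e.g. Metricbeat) is indexed without a pipeline
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
```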
Convert the ingest pipeline to a Logstash pipeline manually, with the help of the ingest converter that ships with Logstash (this is similar to what you did with grok).
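The converter handles the common processors (grok, rename, etc.) but not all of them, so review the generated config; paths here are placeholders:

```
bin/ingest-convert.sh \
  --input file:///tmp/apache2-access-pipeline.json \
  --output file:///tmp/apache2-access.conf
```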
It really depends on your use case; all of the above solutions are correct.