Hi all,
I am currently planning a completely new monitoring and logging environment.
One part of this concerns the Beats, especially Filebeat, and I would like to know what the current best practices are (ELK 6.2). I want to use the Filebeat modules as far as possible, to avoid using Logstash. In the past I found that Beats can use either Elasticsearch or Logstash as the output. My goal is: Filebeat modules wherever I can -> Elasticsearch, and Logstash only where no Filebeat module is available (e.g. syslog from network devices). But what if I want to collect logs from an application for which no module exists? Since the output goes straight to Elasticsearch rather than Logstash, how would I "grok" those logs?
You can do that using an Elasticsearch Ingest Node pipeline.
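For an application without a module, you register a pipeline with a grok processor in Elasticsearch and point Filebeat at it. A minimal sketch (the pipeline ID `myapp-logs` and the log line format are only placeholders for illustration):

```
PUT _ingest/pipeline/myapp-logs
{
  "description": "Sketch: parse a hypothetical 'timestamp level message' application log line",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log_level} %{GREEDYDATA:log_message}"
        ]
      }
    },
    {
      "date": {
        "field": "timestamp",
        "formats": ["ISO8601"]
      }
    }
  ]
}
```

The grok processor uses the same pattern syntax you know from Logstash, so existing patterns usually carry over.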
Is it possible to use ingest nodes for everything (including dumb syslog or SNMP devices), so that there is no need for Logstash at all anymore? What I have seen so far was a Logstash input for port 514 (syslog) etc. Do I understand it correctly that the ingest pipeline config is only visible inside the Elastic Stack, and is not saved in files like Logstash filter configs? Separating the different pipelines should be possible by setting the pipeline in the Filebeat prospector config, as the docs say.
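If I read the docs correctly, that would look something like this (an untested sketch; the paths and pipeline IDs are just placeholders for my environment):

```yaml
# filebeat.yml (Filebeat 6.2) – sketch only, paths and pipeline IDs are made up
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/myapp/*.log
    pipeline: myapp-logs        # ingest pipeline registered in Elasticsearch

  - type: log
    paths:
      - /var/log/otherapp/*.log
    pipeline: otherapp-logs

output.elasticsearch:
  hosts: ["localhost:9200"]
```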
Ingest nodes are less capable than Logstash, so they are not a good fit for everything. Read this blog post for a comparison.
Based on the blog post, there is no added value in using ingest nodes instead of Logstash. Rather, I see many reasons not to use ingest nodes and to go entirely with Logstash, which in turn means there is no need for the modules: just define prospectors, send the events to Logstash, and let grok do the magic.
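That setup could look roughly like this (a rough sketch only; the ports, grok pattern, and index name are placeholders for my environment):

```
# Logstash pipeline – sketch; ports, pattern, and index name are placeholders
input {
  beats {
    port => 5044        # Filebeat prospectors ship to Logstash instead of Elasticsearch
  }
  syslog {
    port => 514         # plain syslog from network devices
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```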