Hi, I'm new to the ELK Stack and I've been assigned the seemingly simple task of using it to store and analyze our logs.
Quick intro: we have a folder containing logs, one per microservice. A new log file is produced every day with the date in the filename (e.g. ms0_21052018.log).
These logs are similar but not identical, so I opted for n pipelines, one for each microservice.
Each Logstash config file reads from Filebeat, applies some filters, and outputs to the console (this will later change to Elasticsearch).
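For context, one of these pipeline configs looks roughly like this (the filter and names are just placeholders for illustration, not our real config):

```
# ms0.conf — hypothetical pipeline for microservice ms0
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{GREEDYDATA:msg}" }
  }
}

output {
  stdout { codec => rubydebug }  # will become an elasticsearch output later
}
```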
I run Logstash with `bin/logstash`, which reads pipelines.yml and starts every pipeline I define there. However, am I correct in saying that each line of each log gets pushed to every pipeline I have?
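My pipelines.yml is along these lines (ids and paths are made up for the example):

```
# pipelines.yml — one entry per microservice
- pipeline.id: ms0
  path.config: "/etc/logstash/conf.d/ms0.conf"
- pipeline.id: ms1
  path.config: "/etc/logstash/conf.d/ms1.conf"
```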
I know I can filter on the "source" field inside each pipeline, but I still don't like the idea that every new line gets forwarded to all pipelines.
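By that I mean a guard like this at the top of each pipeline's filter block (the ms0_ pattern is just an example matching our filename convention):

```
filter {
  # discard events that didn't originate from this microservice's log files
  if [source] !~ /ms0_/ {
    drop { }
  }
  # ... microservice-specific filters here
}
```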
Question: is there a way to selectively push Filebeat's new log lines to their corresponding pipelines, rather than dropping mismatched events in each pipeline's config file?