Hello everyone.
We are currently running a single Logstash instance that handles log collection from a number of different applications. There are approximately 10 pipelines, each with its own filters for parsing the logs. The pipelines listen for Beats, TCP, and UDP input.
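For context, our setup looks roughly like this in `pipelines.yml` (pipeline IDs and paths are placeholders, not our actual names):

```yaml
# pipelines.yml -- one entry per application pipeline (illustrative sketch)
- pipeline.id: app-one-beats
  path.config: "/etc/logstash/conf.d/app-one.conf"
- pipeline.id: app-two-tcp
  path.config: "/etc/logstash/conf.d/app-two.conf"
- pipeline.id: app-three-udp
  path.config: "/etc/logstash/conf.d/app-three.conf"
# ...roughly 10 entries in total, each with its own inputs and filters
```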
We have had some unfortunate experiences where a badly written filter in a single pipeline caused the entire Logstash instance to shut down. This is obviously behavior we want to avoid.
But what is the alternative? We are not sure it is beneficial to run 10 separate Logstash instances.
What is best practice in this regard? Is running this many pipelines in a single instance an anti-pattern? Or should we simply do a better job of testing our pipelines locally before deploying to production?
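As a concrete example of the kind of local testing we mean, something along the lines of Logstash's built-in config check (paths are placeholders):

```sh
# Validate a single pipeline config without starting Logstash
bin/logstash -f /etc/logstash/conf.d/app-one.conf --config.test_and_exit

# Or validate the full multi-pipeline setup defined in pipelines.yml
bin/logstash --path.settings /etc/logstash --config.test_and_exit
```

As far as we can tell, though, this only catches syntax and compilation errors, not runtime failures on unexpected input.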