I have successfully configured a multi-pipeline setup. Each pipeline normally loads its data.
My multi-pipeline consists of three JDBC pipelines and one Filebeat entry.
This morning I ran into a problem loading data from Filebeat into Logstash.
Filebeat picked up the files from its directory and sent them to Logstash.
But for a reason I don't understand, Logstash did not ingest this data.
I have these questions:
Is it possible for one pipeline to impact the data load of another while it is running?
How can I make sure that Logstash processes the inputs from all connectors?
If you are only going to use a single main pipeline, there is no need to use pipelines.yml, as that is the default behaviour. I would however recommend splitting your configuration into 4 separate pipelines, as they seem to have different combinations of inputs and outputs. Separate pipelines run independently, so a slow or blocked JDBC pipeline cannot hold up the Filebeat one.
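A minimal sketch of what such a pipelines.yml could look like, with four independent pipelines. The pipeline ids and config file paths here are hypothetical placeholders; adjust them to your installation:

```yaml
# pipelines.yml — each entry runs as its own isolated pipeline
# (ids and paths below are examples, not your actual files)
- pipeline.id: jdbc-one
  path.config: "/etc/logstash/conf.d/jdbc-one.conf"
- pipeline.id: jdbc-two
  path.config: "/etc/logstash/conf.d/jdbc-two.conf"
- pipeline.id: jdbc-three
  path.config: "/etc/logstash/conf.d/jdbc-three.conf"
- pipeline.id: filebeat
  path.config: "/etc/logstash/conf.d/filebeat.conf"
```

With this layout, each pipeline has its own queue and worker threads, so backpressure in one pipeline does not stall the others.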