I have a Logstash config with multiple pipelines. One of the pipelines is a central output pipeline that ships to Elasticsearch. The other pipelines have their own inputs and filters and use that central pipeline as their output.
I noticed a hanging shutdown because the output pipeline was shut down before the other pipelines. My only solution was to kill the process. I don't want to lose log messages.
Is there a way to configure a pipeline to start first and stop last?
Currently I am using a memory queue. Would a persistent queue help?
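For reference, my pipelines.yml looks roughly like this (pipeline ids and paths simplified):

```
# pipelines.yml (sketch; ids and paths are illustrative)
- pipeline.id: apache_logs
  path.config: "/etc/logstash/conf.d/apache.conf"
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/syslog.conf"
- pipeline.id: es_output            # central pipeline that ships to Elasticsearch
  path.config: "/etc/logstash/conf.d/es_output.conf"
  # queue.type currently defaults to "memory"; a persistent queue would be queue.type: persisted
```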
Are you using pipeline-to-pipeline communications? If so, the pipeline shutdown calls should be ordered. If you are using (for example) tcp-to-tcp then pipeline shutdown order is not guaranteed.
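For reference, pipeline-to-pipeline wiring uses the pipeline input and output plugins, roughly like this (the address name and Elasticsearch host are just examples):

```
# es_output.conf - the central pipeline other pipelines send to
input {
  pipeline { address => "es_output_address" }
}
output {
  elasticsearch { hosts => ["http://localhost:9200"] }
}

# output section of an upstream pipeline
output {
  pipeline { send_to => ["es_output_address"] }
}
```

This is the wiring the ordered shutdown applies to; anything else (such as tcp output to tcp input on localhost) is just two independent pipelines from Logstash's point of view.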