We are trying to write data in parallel to two different clusters, say cluster A and cluster B, from Logstash. A sample Logstash configuration is given below.
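A minimal sketch of such a configuration, assuming a Beats input; the port, hosts, and index name are placeholders, not the original poster's actual settings:

```
input {
  beats { port => 5044 }   # placeholder input
}
output {
  # two outputs in one pipeline: every event is sent to both clusters
  elasticsearch {
    hosts => ["https://cluster-a.example.com:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
  elasticsearch {
    hosts => ["https://cluster-b.example.com:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```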
When cluster B is down or unreachable, ingestion towards cluster A also stops and we observe data loss. I want to keep sending data to cluster A even if the other cluster is down. Is there any exception handling mechanism to achieve this?
No. Within a single pipeline, Logstash cannot queue up data for one output while continuing to send it to the other, as that could lead to data loss. You probably need to use multiple pipelines with persistent queues to get the behaviour you describe, and even that only allows queueing data up to a point before backpressure is applied.
You will need to define three pipelines in pipelines.yml and use pipeline-to-pipeline communication to send data between them. The first pipeline handles input and processing, while each of the other two handles output to one cluster. For those two output pipelines you need to configure a persistent queue each, so that events can buffer for an unreachable cluster and delay the backpressure that would otherwise stop all processing. A rough sketch is shown below.
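This is essentially the output isolator pattern from the Logstash pipeline-to-pipeline documentation. A minimal sketch, assuming the config files live under /etc/logstash; the pipeline IDs, paths, queue size, port, and cluster addresses are placeholders to adapt:

```
# pipelines.yml -- three pipelines; only the two output pipelines
# get a persistent queue, so each can buffer independently
# when its cluster is unreachable
- pipeline.id: intake
  path.config: "/etc/logstash/intake.conf"
- pipeline.id: out-cluster-a
  path.config: "/etc/logstash/out-cluster-a.conf"
  queue.type: persisted
  queue.max_bytes: 4gb
- pipeline.id: out-cluster-b
  path.config: "/etc/logstash/out-cluster-b.conf"
  queue.type: persisted
  queue.max_bytes: 4gb
```

```
# intake.conf -- input and processing, then fan out a copy of
# every event to both downstream pipelines
input {
  beats { port => 5044 }   # placeholder input
}
output {
  pipeline { send_to => ["cluster_a_queue", "cluster_b_queue"] }
}
```

```
# out-cluster-a.conf -- output to cluster A only
# (out-cluster-b.conf is identical apart from the address and hosts)
input {
  pipeline { address => "cluster_a_queue" }
}
output {
  elasticsearch { hosts => ["https://cluster-a.example.com:9200"] }
}
```

With this layout, if cluster B goes down only the out-cluster-b pipeline blocks; its persistent queue absorbs events up to queue.max_bytes while intake keeps delivering to cluster A. Once that queue fills, backpressure propagates back to the intake pipeline, which is the limit mentioned above.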