Is there any exception handling mechanism in Logstash when writing to multiple Elasticsearch clusters in the output section, to handle Elasticsearch failures?

We are trying to write data in parallel from Logstash to two different clusters, say cluster A and cluster B. A sample Logstash configuration is given below.

input {
        elasticsearch{
                hosts => ["cluster C"]
                index => "index_name"
        }
}
output {
        elasticsearch {
                hosts => ["cluster A"]
                index => "index_name_clusterA"
        }
        elasticsearch {
                hosts => ["cluster B"]
                index => "index_name_clusterB"
        }
} 

When cluster B is down or not reachable, ingestion into cluster A also stops and we observe data loss. I want to keep sending data to cluster A even if the other cluster is down. Is there any exception handling mechanism to achieve this?

No. Logstash cannot queue up data for one output while continuing to send it to another within a single pipeline, as that could lead to data loss. You probably need to use multiple pipelines with persistent queues to get the behaviour you describe, and even then data can only be queued up to a point before backpressure is applied.

Can you please provide a sample pipelines.yml file with which we can send data in parallel?

I do not have a ready-made example, but you will need to define three pipelines in pipelines.yml and use pipeline-to-pipeline communication to send data between them. The first pipeline handles the input and processing, while each of the other two handles output to one cluster. For these two output pipelines you will need to configure a persistent queue each, in order to allow buffering and to delay the backpressure that would otherwise stop all processing.
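
For illustration, a rough sketch of that layout (essentially the output isolator pattern); the pipeline ids, file paths and queue sizes below are placeholders you would adapt:

pipelines.yml:

- pipeline.id: intake
  path.config: "/etc/logstash/conf.d/intake.conf"
- pipeline.id: output_cluster_a
  path.config: "/etc/logstash/conf.d/output_cluster_a.conf"
  queue.type: persisted
  queue.max_bytes: 4gb
- pipeline.id: output_cluster_b
  path.config: "/etc/logstash/conf.d/output_cluster_b.conf"
  queue.type: persisted
  queue.max_bytes: 4gb

intake.conf reads from cluster C and fans each event out to the two downstream pipelines:

input {
        elasticsearch {
                hosts => ["cluster C"]
                index => "index_name"
        }
}
output {
        # send a copy of every event to both output pipelines
        pipeline { send_to => ["cluster_a", "cluster_b"] }
}

output_cluster_a.conf (output_cluster_b.conf is the same with the address, hosts and index changed):

input {
        pipeline { address => "cluster_a" }
}
output {
        elasticsearch {
                hosts => ["cluster A"]
                index => "index_name_clusterA"
        }
}

With this layout, if cluster B becomes unreachable only the output_cluster_b pipeline's persistent queue fills up; cluster A keeps receiving data until that queue reaches its limit, at which point backpressure propagates to the intake pipeline and stops all processing.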
