Logstash pipeline to pipeline losing events when queue is full

Hello,

I have configured pipeline-to-pipeline communication with the output isolator pattern in Logstash 7.4.1 to send logs to a local and a remote Elasticsearch cluster. Both output pipelines, local and remote, are configured with persistent queues.
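
For reference, the persistent queue settings in pipelines.yml look roughly like this (the pipeline IDs, config paths and queue sizes below are placeholders rather than my exact values):

    - pipeline.id: main
      path.config: "/etc/logstash/conf.d/main.conf"
    - pipeline.id: esLocal
      path.config: "/etc/logstash/conf.d/es-local.conf"
      queue.type: persisted
      queue.max_bytes: 1gb
    - pipeline.id: esRemote
      path.config: "/etc/logstash/conf.d/es-remote.conf"
      queue.type: persisted
      queue.max_bytes: 1gb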

The configuration generally works: if the connection to the remote cluster fails, the remote events are queued while local events keep being sent to the local Elasticsearch. The problem is that once the remote queue is full, Logstash starts dropping events on both pipelines.

Here is an example of the pipeline configuration:

pipeline esLocal

    input {
      pipeline {
        address => "esLocal"
      }
    }
    filter {
    }
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "%{[@metadata][indexName]}-%{+yyyy.ww}"
        user => "elastic"
        password => "xxxxxxxxx"
      }
    }

pipeline esRemote

    input {
      pipeline {
        address => "esRemote"
      }
    }
    filter {
    }
    output {
      elasticsearch {
        hosts => ["192.168.100.100:9200"]
        index => "%{[@metadata][indexName]}-%{+yyyy.ww}"
        user => "elastic"
        password => "xxxxxxxxx"
      }
    }

input pipeline

    input {
    }
    output {
      pipeline { send_to => ["esLocal", "esRemote"] }
    }

I have also tested sending the output like this:

    output {
      pipeline { send_to => ["esLocal"] }
      pipeline { send_to => ["esRemote"] }
    }

but I am still experiencing the same problem.

Should I use the clone filter to separate the esRemote events so that events are not dropped on the local pipeline, or is there another way to solve this problem?
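
To clarify what I mean, a rough sketch of the clone-filter approach in the input pipeline would look something like the following. This assumes the clone filter's default (non-ECS) behaviour of setting the type field of each copy to the clone name, and that my original events do not already use that type value:

    filter {
      # create a copy of each event for the remote output;
      # the copy gets its type field set to "remote"
      clone {
        clones => ["remote"]
      }
    }
    output {
      if [type] == "remote" {
        pipeline { send_to => ["esRemote"] }
      } else {
        pipeline { send_to => ["esLocal"] }
      }
    }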
