Multiple Elasticsearch output errors

Hi all! I am trying to set up my Logstash instance to output to two separate Elasticsearch clusters (Logstash version: 2.3.1). Logstash indexes to both Elasticsearch clusters just fine until one cluster becomes unavailable. When that happens, Logstash stops indexing into BOTH clusters. Instead, I want Logstash to keep indexing into the healthy cluster while the other is down.

My Logstash config:

input {
  kafka {
    zk_connect => "<ZK_hosts>"
    group_id => "region1"
    topic_id => "foo-topic"
    consumer_timeout_ms => 15000
    add_field => {
      'logstash-key' => 'foo-topic'
      'logstash-region' => 'region1'
    }
    fetch_message_max_bytes => 500000000
  }
  kafka {
    zk_connect => "<ZK_hosts>"
    group_id => "region2"
    topic_id => "foo-topic"
    consumer_timeout_ms => 15000
    add_field => {
      'logstash-key' => 'foo-topic'
      'logstash-region' => 'region2'
    }
    fetch_message_max_bytes => 500000000
  }
}
output {
  if [logstash-key] == 'foo-topic' {
    if [logstash-region] == "region1" {
      elasticsearch {
        hosts => "<region_1_IP>"
        index => "foo-topic-%{+YYYY.MM.dd}"
      }
    } else if [logstash-region] == "region2" {
      elasticsearch {
        hosts => "<region_2_IP>"
        index => "foo-topic-%{+YYYY.MM.dd}"
      }
    }
  }
}

As you can see, the goal here is to consume from one Kafka topic using two different group_ids; that way, if one Elasticsearch cluster goes down, Logstash should still index to the other, and when the failed cluster comes back, Logstash should pick up where it left off via that region's consumer group offsets. But this fails: when one Elasticsearch cluster dies, Logstash stops indexing to either region :frowning:. It seems to me that either I have misconfigured something, or this is a bug.

This is expected. Logstash runs a single pipeline per process, so a single stalled output applies back-pressure and stalls the entire pipeline, including the outputs that are still healthy.
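
A common workaround on Logstash 2.x is to split the config and run one Logstash process per Elasticsearch cluster, each consuming with its own group_id, so a stalled output only blocks its own process. A rough sketch based on the config above (the file names are just examples, and the routing fields and conditionals are no longer needed because each process has exactly one output):

# region1.conf -- run as its own Logstash process, e.g. bin/logstash -f region1.conf
input {
  kafka {
    zk_connect => "<ZK_hosts>"
    group_id => "region1"
    topic_id => "foo-topic"
    consumer_timeout_ms => 15000
    fetch_message_max_bytes => 500000000
  }
}
output {
  elasticsearch {
    hosts => "<region_1_IP>"
    index => "foo-topic-%{+YYYY.MM.dd}"
  }
}
# region2.conf is identical except group_id => "region2" and hosts => "<region_2_IP>"

With the outputs isolated like this, the region1 process keeps consuming and indexing even while the region2 cluster is down, and region2's consumer group resumes from its last committed offset once that cluster returns.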