Multiple "elasticsearch" output plugins for a single Logstash instance

I am trying to use one Logstash instance to send data to multiple Elasticsearch clusters. The output section of my lumberjack pipeline config looks like this:

output {
  elasticsearch {
    host => "localhost"
    cluster => "my-es-cluster"
  }
  elasticsearch {
    cluster => "my-another-cluster"
  }
  stdout { codec => rubydebug }
}

When I add the second elasticsearch block like that, it floods my Logstash logs with:

{:timestamp=>"2015-07-22T00:02:53.274000+0000", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}

as described in this Logstash discussion thread. The only related material I found is an old Google Groups discussion covering a similar case, and a single comment there says it should work. I also checked the documentation for the output plugin, but as far as I can see it never mentions using the same output block more than once in a single config file.

Am I missing something obvious?

When you don't specify a protocol for the elasticsearch output, it defaults to 'node', which makes the Logstash instance join the Elasticsearch cluster as a client node. With your configuration you are therefore trying to join two different clusters at the same time. I would recommend trying the http protocol for one or both outputs to see if this helps.
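
For example, something along these lines (just a sketch, untested; the host values are placeholders for the HTTP addresses of your two clusters):

output {
  elasticsearch {
    protocol => "http"
    host => "localhost"        # HTTP endpoint of the first cluster (port 9200 by default)
  }
  elasticsearch {
    protocol => "http"
    host => "other-es-host"    # placeholder for a node in the second cluster
  }
  stdout { codec => rubydebug }
}

With the http protocol the output talks to Elasticsearch over the REST API instead of joining the cluster, so the two outputs no longer conflict with each other.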

I did suspect that something along the lines of "being part of two different clusters" was going on, but wasn't sure. My bad for completely overlooking the line "...With the default protocol setting ("node"), this plugin will join your Elasticsearch cluster as a client node, so it will show up in Elasticsearch's cluster status..." from the documentation page. I will try this and let you know. Thank you for the prompt response.

@Christian_Dahlqvist

I followed your advice and modified my lowermost elasticsearch block as follows:

  elasticsearch {
    protocol => "http"
    host => "10.xx.xx.xx"
    #cluster => "elasticsearch"
  }

But now I have another error that says:

{
    :timestamp=>"2015-08-12T16:00:09.101000-0400",
    :message=>"Got error to send bulk of actions: Connection refused",
    :level=>:error
}
{
    :timestamp=>"2015-08-12T16:00:09.101000-0400",
    :message=>"Failed to flush outgoing items",
    :outgoing_count=>14,
    :exception=>#<Manticore::SocketException: Connection refused>,
    :backtrace=>[
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.1-java/lib/manticore/response.rb:35:in `initialize'",
        "org/jruby/RubyProc.java:271:in `call'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.1-java/lib/manticore/response.rb:61:in `call'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.1-java/lib/manticore/response.rb:225:in `call_once'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.1-java/lib/manticore/response.rb:128:in `code'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.12/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'",
        "org/jruby/RubyProc.java:271:in `call'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.12/lib/elasticsearch/transport/transport/base.rb:190:in `perform_request'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.12/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.12/lib/elasticsearch/transport/client.rb:119:in `perform_request'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.12/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch/protocol.rb:103:in `bulk'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:505:in `submit'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:504:in `submit'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:529:in `flush'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:528:in `flush'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:219:in `buffer_flush'",
        "org/jruby/RubyHash.java:1341:in `each'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:216:in `buffer_flush'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:112:in `buffer_initialize'",
        "org/jruby/RubyKernel.java:1511:in `loop'",
        "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:110:in `buffer_initialize'"
    ],
    :level=>:warn
}

...which I guess is a network-related issue, since telnet from my Logstash box to that host on port 9200 also gets connection refused. What is weird is that the Logstash service status says it is not running, which would make sense if it had died on that exception, yet netstat -elntp shows me this (I am running Logstash over 443):

Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 0 231388 10568/java

So Logstash is not actually shut down.
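
For reference, this is roughly the check I ran from the Logstash box (10.xx.xx.xx is the same placeholder address as in the elasticsearch block above; the curl line is just an additional sanity check against the HTTP API, not something taken from my logs):

telnet 10.xx.xx.xx 9200          # "connection refused" -> nothing is accepting connections on 9200
curl http://10.xx.xx.xx:9200     # should return the Elasticsearch banner if the HTTP port is reachable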