Error in shipping logs from Logstash to Elasticsearch

I am getting an error while forwarding Filebeat-indexed logs from Logstash to Elasticsearch. The message is: "Beats input: the pipeline is blocked, temporary refusing new connection."

You haven't really given us much to go on. How about providing some more information?

What LS/ES/FB version?
What does your LS config look like?

I am using Logstash 2.1, Filebeat 1.0.1.
Here is the output configuration of Logstash:
output {
  elasticsearch {
    hosts => ["172.27.59.93:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
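
For reference, the input side is the standard Beats input; a minimal sketch is below (port 5044 and the commented threshold are assumptions for illustration, not copied from my actual config):

input {
  beats {
    port => 5044                 # default Beats port; must match filebeat.yml
    # congestion_threshold => 5  # seconds before the circuit breaker trips (assumed default)
  }
}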

Also, I tried adding protocol => http, but the config test then fails with an error.

What's the error?

Please provide more information, like the full errors you see; otherwise we'll end up asking a million questions to get it.
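
For example, you can capture the full error by running a config test (the config path here is an assumption; point it at your own):

bin/logstash --configtest -f /etc/logstash/conf.d/
# -t is the short form of --configtest in Logstash 2.x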

{:timestamp=>"2016-02-10T11:05:04.274000+0530", :message=>"CircuitBreaker::rescuing exceptions", :name=>"Beats input", :exception=>LogStash::SizedQueueTimeout::TimeoutError, :level=>:warn}
{:timestamp=>"2016-02-10T11:05:04.275000+0530", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::CircuitBreaker::HalfOpenBreaker, :level=>:warn}
{:timestamp=>"2016-02-10T11:05:04.389000+0530", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}

Any help with this problem?

Does increasing the memory of the Logstash host and its Java heap help alleviate the issue? There's not much to go on from the error alone: something is causing the pipeline to stall or slow down. It could be that the Logstash host is undersized. Is any output reaching the Elasticsearch server? How busy is the Elasticsearch host, and is it keeping up with Logstash? Can you provide more info on your setup?
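
If you want to try the heap first, Logstash 2.x reads the LS_HEAP_SIZE environment variable at startup; a sketch, where the 1g value and the config path are assumptions to adapt to your host:

LS_HEAP_SIZE=1g bin/logstash -f /etc/logstash/conf.d/
# raises the JVM heap for this run; if installed from a package, set it
# persistently in the service defaults file instead (e.g. /etc/default/logstash on deb)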