Nonstop "Lumberjack pipeline is blocked" messages

I recently upgraded Logstash from 1.4 to 1.5.1 and now I am constantly seeing the following in my logstash log:

{:timestamp=>"2015-07-09T15:10:03.559000+0000", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
{:timestamp=>"2015-07-09T15:10:03.761000+0000", :message=>"CircuitBreaker::rescuing exceptions", :name=>"Lumberjack input", :exception=>LogStash::SizedQueueTimeout::TimeoutError, :level=>:warn}
{:timestamp=>"2015-07-09T15:10:03.761000+0000", :message=>"CircuitBreaker::Open", :name=>"Lumberjack input", :level=>:warn}
{:timestamp=>"2015-07-09T15:10:03.761000+0000", :message=>"Exception in lumberjack input thread", :exception=>#<LogStash::CircuitBreaker::OpenBreaker: for Lumberjack input>, :level=>:error}
{:timestamp=>"2015-07-09T15:10:04.059000+0000", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}

Does this just mean I am receiving data faster than I can write it out to Elasticsearch? The service is staying up but I am not sure what this message means.


+1 seeing this same issue. Did you find what was causing it?

My log is just full of this spam. Running LS 1.5.2

{:timestamp=>"2015-07-13T01:14:15.154000+0000", :message=>"CircuitBreaker::rescuing exceptions", :name=>"Lumberjack input", :exception=>LogStash::SizedQueueTimeout::TimeoutError, :level=>:warn}
{:timestamp=>"2015-07-13T01:14:15.155000+0000", :message=>"CircuitBreaker::Open", :name=>"Lumberjack input", :level=>:warn}
{:timestamp=>"2015-07-13T01:14:15.157000+0000", :message=>"Exception in lumberjack input thread", :exception=>#<LogStash::CircuitBreaker::OpenBreaker: for Lumberjack input>, :level=>:error}
{:timestamp=>"2015-07-13T01:14:15.454000+0000", :message=>"CircuitBreaker::Open", :name=>"Lumberjack input", :level=>:warn}
{:timestamp=>"2015-07-13T01:14:15.455000+0000", :message=>"Exception in lumberjack input thread", :exception=>#<LogStash::CircuitBreaker::OpenBreaker: for Lumberjack input>, :level=>:error}
{:timestamp=>"2015-07-13T01:15:02.262000+0000", :message=>"CircuitBreaker::Close", :name=>"Lumberjack input", :level=>:warn}

Hello, I had exactly the same problem. In my case the issue was a misconfiguration between Logstash and ES: ES was configured to listen on the interface IP, but Logstash was trying to connect to it on localhost. Immediately after fixing this typo and restarting ES and Logstash, everything was OK.
Hopefully this helps you as a hint as well.
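
For reference, the corrected output ended up looking roughly like this (Logstash 1.5-era options; the IP is only an illustration): the point is that the host must be the interface address ES actually binds to, not localhost.

output {
  elasticsearch {
    # Must match the address Elasticsearch really listens on
    # (illustrative IP; this was previously pointing at localhost).
    host => "192.168.1.50"
    protocol => "http"
    port => 9200
  }
}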

Did any events make it into Elasticsearch?

Events (lots of them!) are being pushed into ES. I think this is a message you get whenever Logstash is trying to throttle events from the clients.

I had this happen to me last night (Logstash 1.5.2).
All events stopped going into Elasticsearch. Logstash looked like it was running, but it wasn't doing anything.
I had nothing in my logstash.err, but a bunch of these in logstash.log on my servers:
{:timestamp=>"2015-07-22T00:02:53.274000+0000", :message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}

I ended up having to restart logstash to get it all flowing again.

For me the events didn't go into Elasticsearch. This happened when I put another elasticsearch block in the output section. I commented it out and the behavior was back to normal.
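
To illustrate (the hostnames are made up), the output section looked something like this until I commented the second block out:

output {
  elasticsearch {
    host => "es-node-1.example.com"
  }
  # Commenting out this second elasticsearch block restored normal behaviour.
  # elasticsearch {
  #   host => "es-node-2.example.com"
  # }
}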

This constantly happens to me; events stop going into Elasticsearch when it does. All I can do is restart Logstash to make it work again. Verified that this is happening on 1.5.4.
I have a Logstash server with 24 CPU threads and 16 workers. Also of note: when this happens, the box is idle, not using any resources.

I am experiencing the exact same behaviour, crawled all the forums out there but couldn't find a fix.

Do we have a fix at all? Thanks!

Happy to make code changes.

I was able to resolve this by removing the multiline filter.
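
For reference, what I removed was an ordinary multiline filter along these lines (the pattern here is only an illustration, not my exact one):

filter {
  # Dropping this multiline block stopped the blocked-pipeline warnings for me.
  multiline {
    pattern => "^\s"     # join indented lines onto the previous event
    what => "previous"
  }
}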

So far it's holding up with some additional hardware, but I presume it's just a matter of time before the pipeline clogs again.

I am considering topology changes.

Currently the flow is similar to:
shipper(s) lumberjack + tcp -> Logstash indexer (x2, auto-scaled up to 4) -> Elasticsearch data nodes x2

Considering:
shipper(s) lumberjack + tcp -> Logstash -> Redis -> Logstash (all filtering happens here) -> Elasticsearch

This adds an additional layer, but I hope it will be fast enough to accept all incoming requests.
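
A rough sketch of the broker hop I have in mind (the hostname and list key are hypothetical): the first tier only accepts connections and pushes raw events to Redis, and the second tier pulls from Redis and does all the filtering.

# Tier 1: accepts lumberjack/tcp, no filters, just ships events to the broker
output {
  redis {
    host => "redis.internal.example"
    data_type => "list"
    key => "logstash"
  }
}

# Tier 2: reads from the broker, runs the filters, then writes to Elasticsearch
input {
  redis {
    host => "redis.internal.example"
    data_type => "list"
    key => "logstash"
  }
}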

All in all, the ideal would be to get back pressure working consistently across all inputs.

I too was getting a lot of this error in my logstash.log. Changing the lumberjack SSL port resolved the issue, and now my logs are coming in nicely.
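
For context, the change was nothing more exotic than the port option on the lumberjack input (the port number and certificate paths below are illustrative; whatever you pick has to match what the forwarders connect to):

input {
  lumberjack {
    port => 5044
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}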

I'm also experiencing this, but I figured it was simply a poorly configured spool_size and underpowered VMs.