Logstash InsertingToQueueTakeTooLong error message

I have been using this pipeline for weeks with no issues. Today I logged on, fired everything up, and I am now getting this error message in Logstash:

{:timestamp=>"2016-05-08T07:43:12.806000+0000", :message=>"CircuitBreaker::rescuing exceptions", :name=>"Beats input", :exception=>LogStash::Inputs::Beats::InsertingToQueueTakeTooLong, :level=>:warn}
{:timestamp=>"2016-05-08T07:43:12.807000+0000", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::HalfOpenBreaker, :level=>:warn}

The only way I can get it to run is to remove all the filters. There has never been an issue before, and everything is running on separate EC2 instances. Any idea what could be causing this all of a sudden?

It looks like you're simply trying to stuff more data into Logstash than it can cope with. Is there any reason to suspect that this isn't just a capacity issue (that can be addressed with more horsepower and/or configuration tuning)?
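If it is a capacity issue, two knobs worth trying before restructuring the pipeline are the filter worker count and the beats input's congestion threshold (the timeout the circuit breaker uses before tripping). A minimal sketch, assuming Logstash 2.x and placeholder values, none taken from the original config:

```
# Sketch only: port and threshold are illustrative, not from the poster's setup.
input {
  beats {
    port => 5044
    # Give slow filters more time before the circuit breaker
    # rejects connections (value in seconds).
    congestion_threshold => 30
  }
}
```

Filter workers are set on the command line, e.g. `bin/logstash -w 4 -f pipeline.conf`, which lets expensive filters (grok in particular) run in parallel across cores.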

It was never a problem until now, which is odd. But since posting this and doing some reading, I changed the setup so that filtering is done after Kafka; that way Logstash doesn't lag out.
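The post-Kafka arrangement described above splits the pipeline in two, so the Beats-facing stage stays fast and the heavy filtering happens on the consumer side. A rough sketch of the two configs, with hypothetical hostnames, topic name, and plugin options (typical of the Logstash 2.x Kafka plugins), since none of these details are in the thread:

```
# Stage 1 (shipper): accept Beats traffic and hand it straight to Kafka, no filters.
input  { beats { port => 5044 } }
output { kafka { topic_id => "logs" bootstrap_servers => "kafka:9092" } }
```

```
# Stage 2 (indexer, on a separate instance): consume from Kafka at its own pace
# and apply the expensive filters here, where backpressure can't trip the
# Beats circuit breaker.
input  { kafka { topic_id => "logs" zk_connect => "zookeeper:2181" } }
filter {
  # the original pipeline's grok/mutate filters go here
}
output { elasticsearch { hosts => ["es:9200"] } }
```

Kafka buffers the events, so a filtering slowdown in stage 2 no longer stalls the input in stage 1.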