Filebeat to Logstash causing circuit breaker errors

First, an overview of the environment showing the flow of logs:
Filebeat -> Logstash (ingest) -> Redis (message queue) -> Logstash filter nodes (3) -> Elasticsearch

We currently have syslog sending logs, and they show up in Kibana without issue.
However, we are using Filebeat 5.0 (or 1.3.1) to send logs to Logstash 2.3.4.

Once Filebeat starts sending logs to the Logstash ingest node, our Grafana graph shows the high ingest rate, but the filter rate does not increase to match. Moments later, Logstash writes this:
"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::OpenBreaker, :level=>:warn}...
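That warning comes from the beats input's internal circuit breaker in Logstash 2.x: if the downstream pipeline (here, the Redis output) blocks for longer than the input's congestion threshold (default 5 seconds), the breaker opens and new Beats connections are rejected. A sketch of raising that threshold on the ingest node, assuming the `congestion_threshold` option of the logstash-input-beats plugin bundled with Logstash 2.3:

```
input {
  beats {
    port => "5043"
    tags => ["fglam"]
    type => "fglam-beats"
    # Seconds the pipeline may block before the breaker opens
    # (assumed option name for the 2.x-era beats input; default 5)
    congestion_threshold => 30
  }
}
```

Raising the threshold only buys time, though; if the output is truly stalled, the breaker will still trip eventually.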

As far as we can see, the logs never make it to Redis.
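One quick way to confirm whether anything is reaching Redis is to inspect the queue key directly with redis-cli (host, port, and key taken from the configs below; adjust to your environment):

```
# Verify the Redis instance is reachable from the ingest node
redis-cli -h ist000216 -p 6379 PING

# Check whether the Logstash list key exists and is growing
redis-cli -h ist000216 -p 6379 LLEN logstash-fglam-beats
```

If LLEN stays at 0 while Filebeat is sending, the ingest node is failing to write to Redis (connectivity, wrong host, or a blocked pipeline).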

Below are the pipelines for the ingest and filter nodes. Your assistance is greatly appreciated.

logstash (ingest) pipeline:

input {
  beats {
    port => "5043"
    tags => ["fglam"]
    type => "fglam-beats"
  }
}

output {
  if [type] == "fglam-beats" {
    redis {
      data_type => "list"
      key => "logstash-fglam-beats"
      congestion_threshold => "2200000"
    }
  }
}
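Note that the redis output above does not specify a host, so it falls back to the plugin default (127.0.0.1). Since the filter nodes read from Redis on ist000216, the ingest node may be trying to write to a local Redis that isn't running, which would block the pipeline and trip the circuit breaker. A sketch with the host made explicit, assuming the same Redis instance the filter nodes use:

```
output {
  if [type] == "fglam-beats" {
    redis {
      host => "ist000216"    # make the Redis host explicit (plugin default is 127.0.0.1)
      port => 6379
      data_type => "list"
      key => "logstash-fglam-beats"
      congestion_threshold => "2200000"
    }
  }
}
```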

filter node pipeline:

input {
  redis {
    type => "fglam-beats"
    data_type => "list"
    key => "logstash-fglam-beats"
    #key => "filebeat"
    port => "6379"
    host => "ist000216"
    add_field => { "index_name" => "fglam" }
  }
}

filter {
  if "fglam-tags" in [tags] {
    environment {
      add_metadata_from_env => {
        "hostname" => "HOSTNAME"
      }
    }
    mutate {
      add_tag => [ "%{[@metadata][hostname]}" ]
    }
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:my_timestamp}\s+%{SEV:severity}\s+%{CAT1:cat1}\s+%{CAT2:cat2}%{BODY:body}" }
    }
  }
}
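The grok filter above references patterns (SEV, CAT1, CAT2, BODY) that do not ship with Logstash, so they must be defined in a patterns file that grok can find via `patterns_dir`. A hypothetical sketch (the path and pattern bodies are assumptions, since the real definitions aren't shown in the post):

```
# /etc/logstash/patterns/fglam  (hypothetical path and pattern bodies)
SEV (VERBOSE|DEBUG|INFO|WARN|ERROR|FATAL)
CAT1 \S+
CAT2 \S+
BODY .*
```

and in the filter:

```
grok {
  patterns_dir => ["/etc/logstash/patterns"]
  match => { "message" => "%{TIMESTAMP_ISO8601:my_timestamp}\s+%{SEV:severity}\s+%{CAT1:cat1}\s+%{CAT2:cat2}%{BODY:body}" }
}
```

If the custom patterns can't be resolved, grok tags events with _grokparsefailure rather than stalling, so this would not cause the circuit breaker by itself, but it is worth fixing.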

As far as we can see, the logs never make it to Redis.

None of them? Is there anything else in the Logstash log? What if you crank up the log level?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.