I'm experiencing problems with the beats input in Logstash, getting the following error:
{:timestamp=>"2016-04-15T10:18:32.126000+0200", :message=>"CircuitBreaker::Open", :name=>"Beats input", :level=>:warn}
{:timestamp=>"2016-04-15T10:18:32.127000+0200", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::OpenBreaker, :level=>:warn}
and afterwards lots of errors like the following appear:
{:timestamp=>"2016-04-15T10:18:33.628000+0200", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
I'm using Filebeat 1.2.1, Logstash 2.3.1, logstash-input-beats 2.2.7, and Elasticsearch 2.0.0. Logstash config file:
I have seen this question raised many times and I have tried what is suggested: changing the congestion_threshold, the number of workers, etc. The problem appeared to come from the grok filter, so I also tried splitting it into several smaller ones, but it still does not work.
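For anyone following along, the congestion_threshold mentioned above is an option on the beats input plugin itself (in logstash-input-beats 2.x). A minimal sketch, with illustrative values only — the port and threshold are assumptions, not taken from the original config:

```conf
input {
  beats {
    port => 5044
    # Seconds the pipeline may stall before the circuit breaker
    # trips (the plugin's default is 5); raising it gives slow
    # filters/outputs more headroom.
    congestion_threshold => 40
  }
}
```

The number of filter workers is set separately when starting Logstash, e.g. `bin/logstash -w 4 -f pipeline.conf`.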
Any suggestions/ideas? Is this problem planned to be solved in a future release?
I was mistaken; I thought it could be the grok filter because I was sending less data. I have just tried to send a file with more than 8000 events, and within the first few hundred events I'm getting the same error.
Changing the output from Elasticsearch to a file seems to work. What could the problem be in ES? Should I change something in its configuration?
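To isolate the issue the way described above, you can temporarily swap the elasticsearch output for a file output. A minimal sketch (the path is an arbitrary example):

```conf
output {
  # Write events to disk instead of Elasticsearch; if the
  # CircuitBreaker warnings disappear, the bottleneck is on
  # the Elasticsearch side rather than in the filters.
  file {
    path => "/tmp/logstash-test.log"
  }
}
```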
I'm starting to use Marvel, but I couldn't see anything weird going on. For the node, the CPU and JVM percentages are really low (less than 7%). For the index, at the moment the errors occurred, I didn't see anything strange, but I'm not sure what the expected normal behaviour should be.
Also, I have tried with the current version of each package (2.3 for ES and LS, Kibana 4.5, filebeat 2.1), but it still doesn't work. However, the 5.0 alpha seems to work once I take out the date filter in Logstash. Could that be the problem?
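For context, a typical date filter looks like the sketch below. Since the original config isn't shown, the field name and pattern here are hypothetical; a pattern that doesn't match (or an expensive grok feeding it) can slow the pipeline enough to trip the circuit breaker:

```conf
filter {
  # Parse the "timestamp" field (assumed name) into @timestamp;
  # the format string must match the actual log format exactly.
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```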
Also, when shipping the whole file directly from Filebeat to ES, there are no errors and all events are collected in ES. So I'm starting to feel lost... how can I determine where the problem is? How can I tell whether Logstash or Elasticsearch is the culprit in this case?
Unfortunately no. I'm trying to monitor Filebeat's CPU usage by enabling the cpuprofile flag at launch, and I'm also using Marvel. I have also seen that there are some Filebeat properties that can be tuned for performance: bulk_max_size, max_retries, and timeout, but I haven't found a good combination of all the parameters yet.
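Those tuning options live under the logstash output section of filebeat.yml (Filebeat 1.x layout). A sketch with illustrative values — the host and the specific numbers are assumptions to experiment with, not recommendations:

```yaml
output:
  logstash:
    hosts: ["localhost:5044"]
    # Max events per batch sent to Logstash; smaller batches
    # put less pressure on a stalled pipeline.
    bulk_max_size: 1024
    # How often to retry a failed batch before dropping/backing off.
    max_retries: 3
    # Seconds to wait for a response from Logstash.
    timeout: 30
```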
Are you having the same issue?