Pipeline is full, she can't take much more captain!


(PJ Champion) #1

Everything was going fine until I doubled the number of log forwarders. Throughput went from around 5,000 events/minute to 5/minute (not kidding). I've since cut the number of log forwarders back down to the original amount, but I can't seem to get the performance back.

Logstash.log:
{:timestamp=>"2016-11-29T13:56:27.261000-0500", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-11-29T13:56:27.761000-0500", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-11-29T13:56:28.262000-0500", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}

Logstash: /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    congestion_threshold => "40"
  }
}

I may be getting the above confused; possibly I'm changing my congestion settings in the wrong place:
/opt/logstash/bin/logstash.conf
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Stats on the box look fine (CPU/memory).

plugin:elasticsearch Request Timeout after 1500ms also displays from time to time when Kibana is starting.
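If that Kibana timeout keeps showing up, the request timeout to Elasticsearch can be raised in kibana.yml. This is only a sketch: elasticsearch.requestTimeout is a standard Kibana setting, but the value below is an arbitrary example, and the root cause here was Elasticsearch itself being unhealthy, not the timeout:

# kibana.yml (value in milliseconds; example only)
elasticsearch.requestTimeout: 30000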


(PJ Champion) #2

Hey PJ, let me help you with this: what's the status of your elasticsearch service?

Funny you should ask: it says out of memory. However, I can't seem to change the heap space.

Here I found a snippet that might help; it seems the spot to change the memory settings has changed.
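For anyone else hitting this: a sketch of the change, assuming Elasticsearch 5.x. In 5.x the heap is no longer set via the ES_HEAP_SIZE environment variable; it moved into a jvm.options file (the path below is typical for package installs but may vary). Min and max heap should be set to the same value:

# /etc/elasticsearch/jvm.options (location may vary by install method)
# Example: a 2 GB heap; size this to roughly half of available RAM
-Xms2g
-Xmx2g

On older (2.x) installs the equivalent was ES_HEAP_SIZE in /etc/sysconfig/elasticsearch or /etc/default/elasticsearch. Restart the elasticsearch service after changing either.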

*Marked as self resolved.

PJ is a smart awesome guy who needs to pay more attention to Java, like always.


(Andrew Kroh) #3

@gchampion Glad you found the issue. As you add more nodes you might want to walk through the LS performance troubleshooting checklist to help optimize Logstash.

What version of LS are you running? congestion_threshold is being deprecated in LS 5.0 because it's no longer needed; the protocol between LS and Beats handles back-pressure itself.
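For readers on LS 5.x: a sketch of the same beats input from earlier in the thread with the deprecated option dropped (certificate paths copied from the config above; adjust to your own install):

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    # congestion_threshold removed: the Beats/LS protocol handles back-pressure in 5.x
  }
}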


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.