Everything was going fine until I doubled the number of log forwarders: throughput dropped from around 5,000 events/minute to 5/minute (not kidding). I've since cut the number of forwarders back down to the original count, but I can't seem to get the performance back.
Logstash.log:
{:timestamp=>"2016-11-29T13:56:27.261000-0500", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-11-29T13:56:27.761000-0500", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-11-29T13:56:28.262000-0500", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
Logstash: /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    congestion_threshold => "40"
  }
}
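From what I can tell, congestion_threshold is the number of seconds the beats input will wait on a blocked pipeline before it starts refusing connections, so raising it only buys time rather than curing the back-pressure. If this file is the right place, a larger value would look like this (just a sketch: the 120 is an arbitrary guess, and I'm assuming my installed logstash-input-beats version still supports the option, since it's deprecated in newer releases):

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    # assumption: congestion_threshold is still supported here; the value is
    # seconds to wait on a blocked pipeline before refusing new connections
    congestion_threshold => 120
  }
}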
I may be getting the above confused with the file below; possibly I'm changing my congestion settings in the wrong place:
/opt/logstash/bin/logstash.conf
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
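As far as I understand it, the packaged Logstash service only reads the files under /etc/logstash/conf.d and concatenates them into a single pipeline, so I suspect this second file under /opt/logstash/bin is never actually loaded (and if it were, two beats inputs would collide on port 5044). On the output side, the only throughput knobs I know of are the bulk settings on the elasticsearch output; this is roughly what I'd try (a sketch only: the values are guesses, and I'm assuming flush_size and workers are supported by my logstash-output-elasticsearch version):

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    # assumptions: flush_size and workers exist in this plugin version;
    # bigger bulk batches and an extra output worker might drain the
    # pipeline faster and relieve back-pressure on the beats input
    flush_size => 1000
    workers => 2
  }
}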
Stats on the box look fine (CPU/memory).
Kibana also intermittently shows "plugin:elasticsearch Request Timeout after 1500ms" while it is starting up.
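I assume the Kibana message is just a symptom of Elasticsearch being busy, but for completeness, the timeout looks like it is controlled by elasticsearch.requestTimeout in kibana.yml (the path and the 30000 value below are my assumptions; the value is in milliseconds):

# kibana.yml (path varies by install, e.g. /opt/kibana/config/kibana.yml)
# assumption: elasticsearch.requestTimeout drives the
# "Request Timeout after 1500ms" message; milliseconds
elasticsearch.requestTimeout: 30000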