Hey all,
I'm not seeing any results for the current time period in Kibana for Logstash. I'm seeing messages like this in the Logstash log:
{:timestamp=>"2016-05-05T10:26:39.975000-0400", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::HalfOpenBreaker, :level=>:warn}
{:timestamp=>"2016-05-05T10:26:40.336000-0400", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-05-05T10:26:40.837000-0400", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-05-05T10:26:41.344000-0400", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-05-05T10:26:41.845000-0400", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-05-05T10:26:42.359000-0400", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-05-05T10:26:42.862000-0400", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-05-05T10:26:43.366000-0400", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-05-05T10:26:43.867000-0400", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-05-05T10:26:44.369000-0400", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn
I've verified that Logstash is indeed running with `ps -ef | grep logstash | grep -v grep`:
```
[root@logs:~] # ps -ef | grep logstash | grep -v grep
logstash 15931 1 24 08:14 ? 00:33:18 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -Xmx1g -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -XX:HeapDumpPath=/opt/logstash/heapdump.hprof -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
```
Elasticsearch cluster health reports green, though the pending task count looks high to me:
```
# curl -uadmin:$ES_PASS localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 21,
  "active_shards" : 42,
  "relocating_shards" : 2,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 34664,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 1711428,
  "active_shards_percent_as_number" : 100.0
}
```
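What stands out to me is `number_of_pending_tasks: 34664` with `task_max_waiting_in_queue_millis: 1711428` (almost half an hour). I haven't dug into what those tasks actually are yet; my understanding is the pending tasks API should list them (untested on this cluster, same auth assumed):

```
# list queued cluster-state updates to see what is backing up
curl -uadmin:$ES_PASS localhost:9200/_cluster/pending_tasks?pretty
```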
All ES nodes are also reporting in:
```
# curl -uadmin:$ES_PASS localhost:9200/_cat/nodes?v
host        ip          heap.percent ram.percent load node.role master name
10.10.5.98  10.10.5.98  65           90          5.27 d         *      JF_ES1
10.10.5.108 10.10.5.108 10           63          0.35 d         m      JF_ES3
10.10.5.153 10.10.5.153 41           94          1.48 d         m      JF_ES2
```
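If the bottleneck is on the Elasticsearch side, I'd expect to see bulk rejections, so my next step is probably something like this (untested; the column names are from the _cat docs as I remember them):

```
# check for rejected bulk operations, which would push back on the Logstash output
curl -uadmin:$ES_PASS "localhost:9200/_cat/thread_pool?v&h=host,bulk.active,bulk.queue,bulk.rejected"
```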
I'm just wondering what the messages in the logs here could indicate and how I can correct the problem.
Thanks!