message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
message=>"CircuitBreaker::rescuing exceptions", :name=>"Lumberjack input", :exception=>LogStash::SizedQueueTimeout::TimeoutError, :level=>:warn}
message=>"Lumberjack input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::CircuitBreaker::HalfOpenBreaker, :level=>:warn}
My setup is:
Logstash forwarder ---> Logstash server ---> Elasticsearch ---> Kibana
Logstash server version is logstash-2.2.0-1.noarch
Elasticsearch version is elasticsearch-2.2.0-1.noarch
Are any events getting through? Are there any error messages from the elasticsearch output (which I assume you're using)? The error messages you've been quoting so far are just symptoms of the actual problem, namely that the output(s) are blocked. That's what you should get to the bottom of.
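If you're not sure where to look, the Logstash log itself usually shows errors from the elasticsearch output. A rough sketch, assuming the default RPM log location:

    # show recent output-related errors (path assumes the default RPM install)
    grep -i "elasticsearch" /var/log/logstash/logstash.log | tail -n 50

Any "retrying failed action" or connection errors there will point at what the output is struggling with.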
I'd be surprised if you couldn't build Filebeat for CentOS 5.
WARN ][netty.channel.DefaultChannelPipeline] An exception was thrown by a user handler while handling an exception event ([id: 0x8244e76d, /127.0.0.1:53071 => /127.0.0.1:9200] EXCEPTION: java.lang.OutOfMemoryError: Java heap space)
java.lang.OutOfMemoryError: Java heap space
message=>"retrying failed action with response code: 503", :level=>:warn}
message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
You might have free memory on the machine but the JVM is out of heap. Increase the heap (but not to more than 50% of RAM) or decrease the amount of data you store on that node.
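For what it's worth, on an RPM-based install of Elasticsearch 2.x the heap is normally set through ES_HEAP_SIZE. A minimal sketch, assuming the default package layout (the 2g value is only an example and should stay at or below ~50% of the machine's RAM):

    # /etc/sysconfig/elasticsearch
    # raise the JVM heap for the Elasticsearch service
    ES_HEAP_SIZE=2g

Then restart Elasticsearch (service elasticsearch restart) for the change to take effect.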
message=>"retrying failed action with response code: 503", :level=>:warn}
It looks like ES is still having issues. There should be clues in the logs.
and restarted the Elasticsearch service; after that I am getting this in the Elasticsearch logs:
Failed to execute phase [query_fetch], all shards failed; shardFailures {[ZPZdeLp3TFWL9xIC3s6EXg][.kibana][0]: RemoteTransportException[[node1][localhost/127.0.0.1:9300][indices:data/read/search[phase/query+fetch]]]; nested: OutOfMemoryError[unable to create new native thread]; }
ES won't allocate replica shards on the same machine as the primaries, so your statement is ambiguous, and I won't ask the same question three times. Anyway, it seems you've reached the limit of your machine. A 3 GB JVM heap doesn't appear to be enough for 19 GB/day and hundreds of shards. I strongly recommend that you reduce the number of shards. Over and out.
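If it helps, the shard count of future daily indices can be lowered with an index template. This is only a sketch, assuming the default logstash-* index naming; the template name and values are illustrative:

    # one primary shard and no replicas for new logstash-* indices (single-node setup)
    curl -XPUT 'http://localhost:9200/_template/logstash_shards' -d '{
      "template": "logstash-*",
      "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0
      }
    }'

Existing indices keep their shard count, so closing or deleting old indices (e.g. with Curator) is what actually brings the total down; you can check the current total with curl 'localhost:9200/_cat/shards' | wc -l.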
"name":"plugin:elasticsearch","state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Request Timeout after 1500ms"}