Getting errors in Logstash log: "Lumberjack input pipeline is blocked"

Hi,

I'm getting the errors below in my Logstash logs:

message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}
message=>"CircuitBreaker::rescuing exceptions", :name=>"Lumberjack input", :exception=>LogStash::SizedQueueTimeout::TimeoutError, :level=>:warn}
message=>"Lumberjack input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::CircuitBreaker::HalfOpenBreaker, :level=>:warn}

My setup is

Logstash forwarder ---> Logstash server ---> Elasticsearch ---> Kibana

Logstash server version is logstash-2.2.0-1.noarch
Elasticsearch version is elasticsearch-2.2.0-1.noarch

Logstash and Elasticsearch are running on the same server.

Please help me fix this issue.

Thanks.

Unless you're running the most recent version of Filebeat, try upgrading it.

Hi,

Thanks for your reply.

I am using CentOS 5, so I can't install Filebeat there.

Sorry, I misread. For some reason I thought you were using Filebeat. Either way, why wouldn't you be able to install Filebeat on CentOS 5?

Filebeat is only supported from CentOS 6 onwards, right? I'm getting a "kernel too old" message when starting Filebeat on CentOS 5.

One more thing: I have installed ELK on CentOS 6.7 with 7 GB RAM, 2 CPUs, and 730 GB of disk.

Thanks.

I am also getting the pipeline error for the Beats input.

message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}

Are any events getting through? Are there any error messages from the elasticsearch output (which I assume you're using)? The error messages you've been quoting so far are just symptoms of the actual problem, namely that the output(s) are blocked. That's what you should get to the bottom of.

I'd be surprised if you couldn't build Filebeat for CentOS 5.
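
If you want to confirm whether events are reaching the outputs at all, one way (just a sketch, adjust the hosts value and the rest to your actual config) is to temporarily add a stdout output next to the elasticsearch output and watch what Logstash prints:

output {
  elasticsearch {
    hosts => ["localhost:9200"]   # assumed address; use your Elasticsearch host
  }
  stdout {
    codec => rubydebug            # print every event for debugging
  }
}

If nothing shows up on stdout either, the problem is upstream of the outputs; if events print but Elasticsearch errors appear, the output is the bottleneck.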

I'm getting this error in the Elasticsearch logs:

WARN ][netty.channel.DefaultChannelPipeline] An exception was thrown by a user handler while handling an exception event ([id: 0x8244e76d, /127.0.0.1:53071 => /127.0.0.1:9200] EXCEPTION: java.lang.OutOfMemoryError: Java heap space)
java.lang.OutOfMemoryError: Java heap space

But I still have 3 GB of free memory.

Now there are no errors in the Elasticsearch logs.

But I'm getting these errors in logstash.log:

message=>"retrying failed action with response code: 503", :level=>:warn}
message=>"Lumberjack input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}

Please help me fix it.

But I still have 3 GB of free memory.

You might have free memory on the machine but the JVM is out of heap. Increase the heap (but not to more than 50% of RAM) or decrease the amount of data you store on that node.
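
For example (assuming an RPM install of Elasticsearch 2.x, where the defaults live in /etc/sysconfig/elasticsearch), you would set something like:

# /etc/sysconfig/elasticsearch
ES_HEAP_SIZE=3g

and then restart the Elasticsearch service so the setting is picked up. The exact file and variable name depend on how Elasticsearch was installed.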

message=>"retrying failed action with response code: 503", :level=>:warn}

It looks like ES is still having issues. There should be clues in the logs.

Hi,

I increased the heap with the following:

export ES_HEAP_SIZE=3g

and restarted the Elasticsearch service. After that I am getting this in the Elasticsearch logs:

Failed to execute phase [query_fetch], all shards failed; shardFailures {[ZPZdeLp3TFWL9xIC3s6EXg][.kibana][0]: RemoteTransportException[[node1][localhost/127.0.0.1:9300][indices:data/read/search[phase/query+fetch]]]; nested: OutOfMemoryError[unable to create new native thread]; }

How many shards do you have on this server?

The default of 5 shards per index, and 1 node.

The daily index size is around 19 GB.

Yes, but what's the total number of shards? For a single-node cluster, having five shards per index is way too much.

160 primary shards + 1 replica each ==> 320 shards in total.

ES won't allocate replica shards on the same machine as the primaries, so your statement is ambiguous, and I won't ask the same question three times. Anyway, it seems you've reached the limit of your machine. A 3 GB JVM heap doesn't appear to be enough for 19 GB/day and hundreds of shards. I strongly recommend that you reduce the number of shards. Over and out.
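
If you want to bring the shard count down for new indices, one rough sketch (assuming your indices match logstash-* and that a single shard with no replica is acceptable on a one-node cluster) is an index template like:

curl -XPUT 'localhost:9200/_template/single_shard' -d '{
  "template": "logstash-*",
  "order": 1,
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'

The template name and values here are just examples, and a template only affects indices created after it exists; existing indices keep their shard count.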

++ to Magnus's comments, you need more memory.

Hi,

I am getting this error in the Kibana log:

"name":"plugin:elasticsearch","state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Request Timeout after 1500ms"}

Read the message carefully. It's not an error message.

Hi,

Due to this "Request Timeout after 30000ms"

Getting "Discover: Gateway Timeout" in kibana. I cant view the discover logs in kibana