Status changed from yellow to red - Request Timeout after 1500ms

Hi,

I am using Kibana 4.4.2 and Elasticsearch 2.2.1.

It was all working fine, but now Kibana has suddenly started showing an error: "Status changed from yellow to red - Request Timeout after 1500ms".

Can anyone please suggest something?

Thanks,
Rajeev

Hi,

Could you please provide a bit more of the log, so we can see where exactly the request timeout happened?

Cheers,
Tim

Hi Timroes,

Thanks for replying. I am pasting the Elasticsearch log below.
One weird thing I observed is that the ELK cluster is responding slowly: it takes more time initializing and starting, and I am also not able to delete any index. Can you please help?

The exception I am seeing is: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 1120

Logs:

[2018-03-19 13:12:52,575][INFO ][env                      ] [node-1] heap size [23.9gb], compressed ordinary object pointers [true]
[2018-03-19 13:12:56,934][INFO ][node                     ] [node-1] initialized
[2018-03-19 13:12:56,934][INFO ][node                     ] [node-1] starting ...
[2018-03-19 13:20:53,095][INFO ][transport                ] [node-1] publish_address {stciperf1/172.22.59.65:9300}, bound_addresses {[2404:f801:28:fc0d:7d36:d3ac:adb:4a25]:9300}, {172.22.59.65:9300}, {[2404:f801:28:fc0d:3d65:7c19:16b1:4934]:9300}, {[fe80::3d65:7c19:16b1:4934]:9300}
[2018-03-19 13:20:53,095][INFO ][discovery                ] [node-1] stciperfcluster/iEWaSX9hSOenrhI3_ID9Ow
[2018-03-19 13:21:10,018][WARN ][transport.netty          ] [node-1] exception caught on transport layer [[id: 0xf837d790, /172.22.59.60:50182 :> /172.22.59.65:9300]], closing connection
java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 1120
	at org.jboss.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)
	at org.elasticsearch.transport.netty.ChannelBufferStreamInput.read(ChannelBufferStreamInput.java:91)
	at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:117)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
	at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)
[2018-03-19 13:21:23,113][WARN ][discovery                ] [node-1] waited for 30s and no initial state was set by the discovery
[2018-03-19 13:21:44,161][INFO ][cluster.service          ] [node-1] detected_master {node-2}{RNJ2rmztSXGWwTN6SyumnA}{172.22.59.59}{172.22.59.59:9300}, added {{node-2}{RNJ2rmztSXGWwTN6SyumnA}{172.22.59.59}{172.22.59.59:9300},{node-3}{e1kL9or0SmimXWeXlgQ3-Q}{172.22.59.60}{172.22.59.60:9300},{node-4}{XVfk7ZfWRMOsb6nubZliMQ}{172.22.59.61}{172.22.59.61:9300},}, reason: zen-disco-receive(from master [{node-2}{RNJ2rmztSXGWwTN6SyumnA}{172.22.59.59}{172.22.59.59:9300}])
[2018-03-19 13:21:44,911][INFO ][cluster.service          ] [node-1] added {{node-0}{eQwCjn7jTEKHEFFE7h0etA}{172.22.59.62}{172.22.59.62:9300}{master=true},}, reason: zen-disco-receive(from master [{node-2}{RNJ2rmztSXGWwTN6SyumnA}{172.22.59.59}{172.22.59.59:9300}])
[2018-03-19 13:24:45,896][INFO ][http                     ] [node-1] publish_address {stciperf1/172.22.59.65:9200}, bound_addresses {[2404:f801:28:fc0d:7d36:d3ac:adb:4a25]:9200}, {172.22.59.65:9200}, {[2404:f801:28:fc0d:3d65:7c19:16b1:4934]:9200}, {[fe80::3d65:7c19:16b1:4934]:9200}
[2018-03-19 13:24:45,896][INFO ][node                     ] [node-1] started
[2018-03-19 13:40:42,405][INFO ][cluster.service          ] [node-1] removed {{node-0}{eQwCjn7jTEKHEFFE7h0etA}{172.22.59.62}{172.22.59.62:9300}{master=true},}, reason: zen-disco-receive(from master [{node-2}{RNJ2rmztSXGWwTN6SyumnA}{172.22.59.59}{172.22.59.59:9300}])
[2018-03-19 13:43:21,480][INFO ][bootstrap                ] running graceful exit on windows
[2018-03-19 13:43:21,481][INFO ][node                     ] [node-1] stopping ...
[2018-03-19 13:43:21,646][WARN ][indices.cluster          ] [node-1] [[flightnb_67][4]] marking and sending shard failed due to [failed recovery]
RecoveryFailedException[[flightnb_67][4]: Recovery failed from {node-2}{RNJ2rmztSXGWwTN6SyumnA}{172.22.59.59}{172.22.59.59:9300} into {node-1}{iEWaSX9hSOenrhI3_ID9Ow}{172.22.59.65}{stciperf1/172.22.59.65:9300}{master=false}]; nested: TransportException[transport stopped, action: internal:index/shard/recovery/start_recovery];
	at org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:258)
	at org.elasticsearch.indices.recovery.RecoveryTarget.access$1100(RecoveryTarget.java:69)
	at

This sounds more like a general problem with your Elasticsearch cluster rather than something Kibana-specific, given that your cluster is responding slowly and you are not even able to delete indices. I will move this to the Elasticsearch topic, since there might be more people there who can help you appropriately.

Elasticsearch is responding slowly, but I can see that all nodes are green; Kibana, however, is still giving me the error.

In Kibana I am getting the error below:
log [22:59:33.488] [info][status][plugin:sense] Status changed from uninitialized to green - Ready
log [22:59:33.503] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [22:59:33.519] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [22:59:33.519] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
log [22:59:33.519] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
log [22:59:33.535] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
log [22:59:33.535] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
log [22:59:33.535] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
log [22:59:33.535] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
log [22:59:33.550] [info][listening] Server running at http://0.0.0.0:5601
log [22:59:35.035] [error][status][plugin:elasticsearch] Status changed from yellow to red - Request Timeout after 1500ms

This is because Kibana has several timeouts configured and will turn its status to red if, for example, Elasticsearch responds too slowly. You can increase these timeouts via elasticsearch.pingTimeout or elasticsearch.requestTimeout in your kibana.yml, but if your Elasticsearch cluster regularly exceeds them, there is likely still another underlying issue.
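For example, something along these lines in kibana.yml (the values here are only illustrative, not taken from this thread; both are in milliseconds, and Kibana only reads the file at startup, so it needs a restart afterwards):

elasticsearch.pingTimeout: 30000
elasticsearch.requestTimeout: 60000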

I tried increasing the ping timeout to 150000 ms, but it still says "Request Timeout after 1500ms".

Everything was working fine; the issue only started happening yesterday.

Any suggestions?

For now I have moved my Kibana to another node and it is working fine there. Can anyone suggest something? Could it be a hardware issue?
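One way to compare the two nodes (a hedged suggestion, not something tried in this thread; host and port are assumed) is to pull per-node JVM heap and OS statistics from Elasticsearch and see whether the original node is under noticeably more memory or load pressure, for example:

curl 'http://localhost:9200/_nodes/stats/jvm,os?pretty'

If the node Kibana was originally pointing at shows much higher heap usage or system load than the others, that would point at a resource problem on that machine rather than at Kibana itself.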
