Fatal Error -- Kibana: Unable to connect to Elasticsearch

Please help me, I am new to this ELK stack.

Is Elasticsearch running? On the same machine as Kibana? Did you change anything in your Kibana configuration? What if you visit http://hostname:9200 (where hostname is the machine where Elasticsearch should be running)?
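For example, from the machine running Kibana you can check whether Elasticsearch answers HTTP at all. A minimal sketch in Python; the host and port here are assumptions, so use whatever host your Kibana configuration actually points at:

# Checks whether Elasticsearch responds on port 9200 and prints what it says.
import urllib.request, urllib.error

es_url = "http://localhost:9200"   # assumed host/port; adjust to your setup

try:
    with urllib.request.urlopen(es_url, timeout=5) as resp:
        print(resp.getcode())            # 200 means the node is up and serving
        print(resp.read().decode())      # node name, cluster name, version info
except urllib.error.HTTPError as err:
    print("Elasticsearch answered with HTTP", err.code)   # e.g. 503 = not ready
except urllib.error.URLError as err:
    print("Could not reach Elasticsearch:", err.reason)   # nothing listening

If that prints an error instead of a 200, Kibana has no chance of connecting either.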

Elasticsearch is running.

I didn't change anything in the Kibana configuration, but I did shut down some nodes using kopf, and then this suddenly happened. Before that, Kibana couldn't load any data from Elasticsearch; now it shows the message from my first post in this thread.

Next time, please copy/paste text when you can instead of posting screenshots.

Sure, ES is running but with a 503 status. That's bad. Is there anything interesting in your ES logs that could explain what's going on?
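If the node itself answers but gives you a 503, the cluster health API will usually tell you why, for example a red cluster after nodes were shut down. A quick sketch along the same lines, again assuming localhost:9200 (if the health call itself returns 503, catch HTTPError as in the earlier snippet):

# Fetches _cluster/health and prints the fields most relevant to a 503.
import json, urllib.request

with urllib.request.urlopen("http://localhost:9200/_cluster/health", timeout=5) as resp:
    health = json.loads(resp.read().decode())

print(health["status"])                 # green / yellow / red
print(health["number_of_nodes"])        # did the shut-down nodes drop out?
print(health["unassigned_shards"])      # red usually means unassigned primaries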

I am clueless right now. What should I do? :sob:

Kibana: Unable to connect to Elasticsearch

Error: Unable to connect to Elasticsearch
Error: unknown error
at respond (http://103.18.3.245:5601/index.js?_b=6004:81693:15)
at checkRespForFailure (http://103.18.3.245:5601/index.js?_b=6004:81659:7)
at http://103.18.3.245:5601/index.js?_b=6004:80322:7
at wrappedErrback (http://103.18.3.245:5601/index.js?_b=6004:20897:78)
at wrappedErrback (http://103.18.3.245:5601/index.js?_b=6004:20897:78)
at wrappedErrback (http://103.18.3.245:5601/index.js?_b=6004:20897:78)
at http://103.18.3.245:5601/index.js?_b=6004:21030:76
at Scope.$eval (http://103.18.3.245:5601/index.js?_b=6004:22017:28)
at Scope.$digest (http://103.18.3.245:5601/index.js?_b=6004:21829:31)
at Scope.$apply (http://103.18.3.245:5601/index.js?_b=6004:22121:24)

Look in your Elasticsearch logs. They're typically stored in /var/log/elasticsearch.
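If the log is long, filtering for just the WARN/ERROR lines can save a lot of scrolling. A small sketch; the exact file name (elasticsearch.log vs. a file named after your cluster) depends on your install, so the path below is an assumption:

# Prints only WARN/ERROR lines from the Elasticsearch log file.
log_path = "/var/log/elasticsearch/elasticsearch.log"   # assumed default path

with open(log_path, errors="replace") as log:
    for line in log:
        if "[WARN " in line or "[ERROR" in line:
            print(line.rstrip())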

[2015-06-27 00:36:32,193][DEBUG][http.netty ] [esnode] Caught exception while handling client http traffic, closing connection [$
java.nio.channels.ClosedChannelException
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:433)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:128)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java$
at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.ja$
at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:725)
at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.ja$
at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.handleDownstream(HttpPipeliningHandler.java:87)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)

...

[2015-06-27 14:09:31,695][DEBUG][cluster.service ] [esnode] processing [zen-disco-receive(from master [[esnode1][EApdsfvGT-mqegUtMxp3uQ][dnslogger00][inet[/10.61.132.42:9300]]{master=true}])]: execute
[2015-06-27 14:09:31,695][DEBUG][cluster.service ] [esnode] cluster state updated, version [72], source [zen-disco-receive(from master [[esnode1][EApdsfvGT-mqegUtMxp3uQ][dnslogger00][inet[/10.61.132.42:9300]]{master=true}])]
[2015-06-27 14:09:31,695][DEBUG][cluster.service ] [esnode] set local cluster state to version 72
[2015-06-27 14:09:31,700][DEBUG][cluster.service ] [esnode] processing [zen-disco-receive(from master [[esnode1][EApdsfvGT-mqegUtMxp3uQ][dnslogger00][inet[/10.61.132.42:9300]]{master=true}])]: done applying updated cluster_state (version: 72)
[2015-06-27 14:09:37,755][DEBUG][discovery.zen.publish ] [esnode] received cluster state version 73
[2015-06-27 14:09:37,756][DEBUG][cluster.service ] [esnode] processing [zen-disco-receive(from master [[esnode1][EApdsfvGT-mqegUtMxp3uQ][dnslogger00][inet[/10.61.132.42:9300]]{master=true}])]: execute
[2015-06-27 14:09:37,756][DEBUG][cluster.service ] [esnode] cluster state updated, version [73], source [zen-disco-receive(from master [[esnode1][EApdsfvGT-mqegUtMxp3uQ][dnslogger00][inet[/10.61.132.42:9300]]{master=true}])]
[2015-06-27 14:09:37,756][DEBUG][cluster.service ] [esnode] set local cluster state to version 73
[2015-06-27 14:09:37,761][DEBUG][cluster.service ] [esnode] processing [zen-disco-receive(from master [[esnode1][EApdsfvGT-mqegUtMxp3uQ][dnslogger00][inet[/10.61.132.42:9300]]{master=true}])]: done applying updated cluster_state (version: 73)
[2015-06-27 14:57:19,690][INFO ][node ] [esnode] stopping ...
[2015-06-27 14:57:19,710][DEBUG][discovery.zen.fd ] [esnode] [master] stopping fault detection against master [[esnode1][EApdsfvGT-mqegUtMxp3uQ][dnslogger00][inet[/10.61.132.42:9300]]{master=true}], reason [zen disco stop]
[2015-06-27 14:57:19,771][DEBUG][marvel.agent ] [esnode] shutting down worker, exporting pending event
[2015-06-27 14:57:19,771][DEBUG][marvel.agent ] [esnode] worker shutdown
[2015-06-27 14:57:19,772][INFO ][node ] [esnode] stopped
[2015-06-27 14:57:19,772][INFO ][node ] [esnode] closing ...
[2015-06-27 14:57:19,785][INFO ][node ] [esnode] closed

Why did you remove some nodes?