Kibana 4 startup is failing with error "Service Unavailable"

I am trying to start Kibana as a service on RHEL 6, but I am seeing this error in the logs:
{"name":"Kibana","hostname":"zllt3949","pid":119501,"level":50,"err":{"message":"Service Unavailable","name":"Error","stack":"Error: Service Unavailable\n at respond (/opt/app/kibana/web/src/node_modules/elasticsearch/src/lib/transport.js:235:15)\n at checkRespForFailure (/opt/app/kibana/web/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector. (/opt/app/kibana/web/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/opt/app/kibana/web/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)"},"msg":"","time":"2015-08-04T14:29:46.126Z","v":0}

But when I query Elasticsearch for the cluster status, it shows yellow.
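For reference, this is roughly how I check it (the standard cluster health API, assuming the default host and port):

```
curl -s 'http://localhost:9200/_cluster/health?pretty'
# in my case this returns "status" : "yellow"
```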

I tried changing elasticsearch_preserve_host to false and increasing request_timeout to 10000, as suggested in some forums, but it had no impact.

Attaching my configuration file.

Can anyone suggest what the problem is?

This error message means Kibana can't connect to your ES server for some reason. Are the hostname and port settings correct? Looks like right now it points to "localhost" port 9200.
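For Kibana 4.0/4.1 those settings live in config/kibana.yml; a minimal sketch of the relevant lines, with illustrative default values, looks like this:

```
# Kibana 4.0/4.1-style settings (underscore names); values shown are illustrative defaults
port: 5601
host: "0.0.0.0"
elasticsearch_url: "http://localhost:9200"
elasticsearch_preserve_host: true
request_timeout: 300000
```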

Elasticsearch is up and running on the same server on port 9200. The issue happens intermittently: sometimes Kibana starts perfectly and sometimes it doesn't. Can you suggest a reason?

Did you set

gateway.expected_nodes = N

but had only M (< N) ES nodes running?
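If so, a minimal elasticsearch.yml sketch of the recovery settings involved (values purely illustrative) would be:

```
# Hold back cluster state recovery until enough nodes have joined
gateway.recover_after_nodes: 2   # do not start recovery with fewer than 2 nodes
gateway.recover_after_time: 5m   # once that is met, still wait up to 5m...
gateway.expected_nodes: 3        # ...unless all 3 expected nodes are already present
```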

I am observing the same issue as Kuldip, and I do not think this has anything to do with ES not starting up properly. The issue is that if Kibana 4 is started immediately after the Elasticsearch service, Kibana sometimes fails to try "hard enough" to connect to ES.

As far as I understand the documentation, this should be solvable by the request_timeout setting (I set it to 120000, i.e. two minutes), but it is not.

The call stack that Kuldip posted is spot-on. The exception thrown there should be handled if it happens within the request_timeout period.
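As a workaround I currently delay the Kibana start until the health endpoint reports at least yellow; a rough sketch (host, port, and service name are assumptions for my setup):

```
#!/bin/sh
# Block until Elasticsearch answers with at least "yellow", then start Kibana.
# localhost:9200 and the "kibana" service name are assumptions; adjust as needed.
curl -s 'http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=120s' > /dev/null
service kibana start
```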

What is the proper way to have an engineer take a peek? Should I open an issue about this? (Sorry, Kibana/ES newbie.)

Was a solution to this problem ever found? I am also facing the same issue. I have 3 instances ("172.31.56.19", "172.31.56.20", "172.31.56.21") of Elasticsearch (not including the one running on the same instance as Kibana). I have Kibana 4.3 running on the same instance as Elasticsearch 2.1.0 (with node.master and node.data set to false).

```
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["172.31.5.65", "172.31.56.19", "172.31.56.20", "172.31.56.21"]
```
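For context, the node role settings in the same elasticsearch.yml look roughly like this (a sketch; node.name is taken from the log below, network.host is an assumption):

```
# Client node co-located with Kibana -- illustrative sketch, not the actual file
node.name: superman_172_31_5_65
node.master: false
node.data: false
network.host: 172.31.5.65    # assumption: bind to the instance's private IP
```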

This is what I'm seeing in the Elasticsearch log on the Kibana instance:
[2015-12-07 23:41:38,103][INFO ][cluster.service ] [superman_172_31_5_65] detected_master {hulk_172_31_56_20}{RFx_uPKnTRqMPTdVyIJyPw}{172.31.56.20}{172.31.56.20:9300}{max_local_storage_nodes=1}, added {{hulk_172_31_56_20}{RFx_uPKnTRqMPTdVyIJyPw}{172.31.56.20}{172.31.56.20:9300}{max_local_storage_nodes=1},}, reason: zen-disco-receive(from master [{hulk_172_31_56_20}{RFx_uPKnTRqMPTdVyIJyPw}{172.31.56.20}{172.31.56.20:9300}{max_local_storage_nodes=1}])
[2015-12-07 23:41:38,105][INFO ][discovery.zen ] [superman_172_31_5_65] master_left [{hulk_172_31_56_20}{RFx_uPKnTRqMPTdVyIJyPw}{172.31.56.20}{172.31.56.20:9300}{max_local_storage_nodes=1}], reason [transport disconnected]
[2015-12-07 23:41:38,105][INFO ][discovery.zen ] [superman_172_31_5_65] failed to send join request to master [{hulk_172_31_56_20}{RFx_uPKnTRqMPTdVyIJyPw}{172.31.56.20}{172.31.56.20:9300}{max_local_storage_nodes=1}], reason [NodeDisconnectedException[[hulk_172_31_56_20][172.31.56.20:9300][internal:discovery/zen/join] disconnected]]
[2015-12-07 23:41:38,105][WARN ][discovery.zen ] [superman_172_31_5_65] master left (reason = transport disconnected), current nodes: {{spiderman_172_31_56_21}{BqRWYbXgSb6PdRLMNarBvg}{172.31.56.21}{172.31.56.21:9300}{max_local_storage_nodes=1},{ironman_172_31_56_19}{0L4C-MRyQ_iYIZ1ooKYlBg}{172.31.56.19}{172.31.56.19:9300}{max_local_storage_nodes=1},{superman_172_31_5_65}{L7UHgwrwSGiqxkBr_8tbzg}{172.31.5.65}{172.31.5.65:9300}{max_local_storage_nodes=1, data=false, master=false},}

```
[2015-12-07 23:41:38,106][WARN ][cluster.service          ] [superman_172_31_5_65] failed to notify ClusterStateListener
java.lang.IllegalStateException: master not available when registering auto-generated license
        at org.elasticsearch.license.plugin.core.LicensesService.requestTrialLicense(LicensesService.java:750)
        at org.elasticsearch.license.plugin.core.LicensesService.clusterChanged(LicensesService.java:484)
        at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:494)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
```

Any suggestions?