Kibana server is not ready yet - kibana 6.7

Hi,

I have been using Kibana 6.7 and Elasticsearch 6.7. Both services are running, but I am getting a "Kibana server is not ready yet" error.

I tried deleting all four indices.

This is a snapshot of my recent Kibana logs:

"{"type":"log","@timestamp":"2019-04-25T18:33:43Z","tags":["fatal","root"],"pid":19331,"message":"{ Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n at Timeout. (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)\n status: undefined,\n displayName: 'RequestTimeout',\n message: 'Request Timeout after 30000ms',\n body: undefined,\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { statusCode: 503,\n error: 'Service Unavailable',\n message: 'Request Timeout after 30000ms' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}
"

Any more suggestions?

Can you post your full Kibana log file?

{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:reporting@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - Unable to connect to Elasticsearch.","prevState":"red","prevMsg":"Request Timeout after 30000ms"}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:elasticsearch@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - Unable to connect to Elasticsearch.","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["error","elasticsearch","data"],"pid":19437,"message":"Request error, retrying\nGET http://localhost:9200/_xpack => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:xpack_main@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:graph@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:spaces@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:searchprofiler@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:ml@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:tilemap@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:watcher@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:grokdebugger@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:logstash@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:beats_management@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:maps@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:index_management@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:index_lifecycle_management@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:rollup@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:remote_clusters@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:cross_cluster_replication@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:41Z","tags":["status","plugin:reporting@6.7.0","error"],"pid":19437,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-04-25T18:47:43Z","tags":["error","elasticsearch","admin"],"pid":19437,"message":"Request error, retrying\nGET http://localhost:9200/_template/.kibana_task_manager?include_type_name=true&filter_path=.version => read ECONNRESET"}
{"type":"log","@timestamp":"2019-04-25T18:47:46Z","tags":["error","elasticsearch","data"],"pid":19437,"message":"Request error, retrying\nGET http://localhost:9200/_xpack => read ECONNRESET"}
{"type":"log","@timestamp":"2019-04-25T18:47:47Z","tags":["error","elasticsearch","admin"],"pid":19437,"message":"Request error, retrying\nPOST http://localhost:9200/.reporting-
/esqueue/_search => read ECONNRESET"}
{"type":"log","@timestamp":"2019-04-25T18:47:48Z","tags":["error","elasticsearch","admin"],"pid":19437,"message":"Request error, retrying\nPOST http://localhost:9200/.reporting-*/esqueue/_search => read ECONNRESET"}

Thanks. What happens if you navigate to http://localhost:9200 in your browser, from the same machine that Kibana is running on?

Looks like your ES is down or having auth issues. Try the following:

  1. Check your ES server health and make sure it's up:
$ curl -X GET "localhost:9200/_cluster/health"

  2. Check kibana.yml and make sure the auth parameters are correct (see the example after this list). Also, set server.host to 0.0.0.0 if you're trying to access Kibana from a different machine.

  3. Are you behind a proxy? If yes, the ECONNRESET errors might be due to that.
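For point 2, if your cluster has security enabled, a quick sanity check is to send the credentials straight to Elasticsearch and see whether the health call succeeds. This is only a sketch; the elastic user and changeme password are placeholders, so substitute whatever credentials your cluster actually uses:

$ curl -u elastic:changeme -X GET "localhost:9200/_cluster/health?pretty"

A 401 response means the credentials (and therefore probably the ones in kibana.yml) are wrong; a connection refused or a timeout means Elasticsearch itself isn't reachable.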

Kibana and ES are both running on the same server.

I am running the following command:
Command : curl -X GET "localhost:9200/_cluster/health"
Output: curl: (7) couldn't connect to host

server.host = "name of server"

My Kibana and ES are both running on CentOS.

After restarting ES and Kibana I am getting the following:

curl -X GET "localhost:9200/_cluster/health"

{"cluster_name":"es-lp-dev","status":"red","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":115,"active_shards":115,"relocating_shards":0,"initializing_shards":4,"unassigned_shards":2777,"delayed_unassigned_shards":0,"number_of_pending_tasks":5,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":76,"active_shards_percent_as_number":3.970994475138122}[

My ES yml file looks like this. I commented out the xpack security auth block (the anonymous "let anyone do anything" settings):

#xpack.security.authc:
#  anonymous:
#    username: _es_anonymous_user
#    roles: superuser
#    authz_exception: true

It looks like you still have unassigned shards. If you rerun the health check, is the unassigned number (2777) lower?

Kibana will not be "ready" until Elasticsearch is available with a "green" status.
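If the unassigned count isn't dropping, you can ask Elasticsearch why a shard is stuck. A rough sketch using standard APIs (nothing here is specific to your cluster):

$ curl -X GET "localhost:9200/_cluster/allocation/explain?pretty"
$ curl -X GET "localhost:9200/_cat/shards?v" | grep UNASSIGNED

The first call explains the allocation decision for one unassigned shard; the second lists every shard that is still unassigned.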

Yes, the number of unassigned shards is down to 1447 and the status is yellow. How do I fix this?

Sorry, I was incorrect -- "yellow" is generally OK for Kibana. What are your Kibana logs showing now?
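If you want to keep an eye on the remaining shards while they recover, the _cat recovery API should work (a standard API, shown here as a sketch):

$ curl -X GET "localhost:9200/_cat/recovery?v&active_only=true"

Once the active recoveries finish, the unassigned count should keep falling and the cluster should settle at yellow or green.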

ES Logs:

tail -f es-lx2-dev.log
[2019-04-25T14:00:50,876][INFO ][o.e.p.PluginsService ] [4MRxPZh] loaded module [x-pack-sql]
[2019-04-25T14:00:50,876][INFO ][o.e.p.PluginsService ] [4MRxPZh] loaded module [x-pack-upgrade]
[2019-04-25T14:00:50,876][INFO ][o.e.p.PluginsService ] [4MRxPZh] loaded module [x-pack-watcher]
[2019-04-25T14:00:50,877][INFO ][o.e.p.PluginsService ] [4MRxPZh] no plugins loaded
[2019-04-25T14:00:56,149][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [4MRxPZh] [controller/20603] [Main.cc@109] controller (64 bit): Version 6.7.0 (Build d74ae2ac01b10d) Copyright (c) 2019 Elasticsearch BV
[2019-04-25T14:00:58,263][INFO ][o.e.d.DiscoveryModule ] [4MRxPZh] using discovery type [zen] and host providers [settings]
[2019-04-25T14:00:59,253][INFO ][o.e.n.Node ] [4MRxPZh] initialized
[2019-04-25T14:00:59,254][INFO ][o.e.n.Node ] [4MRxPZh] starting ...
[2019-04-25T14:00:59,466][INFO ][o.e.t.TransportService ] [4MRxPZh] publish_address {10.0.8.2:9300}, bound_addresses {127.0.0.1:9300}, {10.0.8.254:9300}
[2019-04-25T14:00:59,817][INFO ][o.e.b.BootstrapChecks ] [4MRxPZh] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-04-25T14:01:02,898][INFO ][o.e.c.s.MasterService ] [4MRxPZh] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {4MRxPZh}{4MRxPZh8QGOrVepj2M4Rbw}{_RABgW7xRNW6jAEsqNaa5Q}{10.0.8.254}{10.0.8.254:9300}{ml.machine_memory=8389656576, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2019-04-25T14:01:02,905][INFO ][o.e.c.s.ClusterApplierService] [4MRxPZh] new_master {4MRxPZh}{4MRxPZh8QGOrVepj2M4Rbw}{_RABgW7xRNW6jAEsqNaa5Q}{10.0.8.254}{10.0.8.2:9300}{ml.machine_memory=8389656576, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {4MRxPZh}{4MRxPZh8QGOrVepj2M4Rbw}{_RABgW7xRNW6jAEsqNaa5Q}{10.0.8.254}{10.0.8.254:9300}{ml.machine_memory=8389656576, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-04-25T14:01:02,981][INFO ][o.e.h.n.Netty4HttpServerTransport] [4MRxPZh] publish_address {10.0.8.254:9200}, bound_addresses {127.0.0.1:9200}, {10.0.8.254:9200}
[2019-04-25T14:01:02,981][INFO ][o.e.n.Node ] [4MRxPZh] started
[2019-04-25T14:01:09,307][INFO ][o.e.l.LicenseService ] [4MRxPZh] license [396dc529-e29b-4ccf-baff-803e76acb78c] mode [trial] - valid
[2019-04-25T14:01:09,309][WARN ][o.e.l.LicenseService ] [4MRxPZh] license [396dc529-e29b-4ccf-baff-803e76acb78c] - expired
[2019-04-25T14:01:09,312][WARN ][o.e.l.LicenseService ] [4MRxPZh] LICENSE [EXPIRED] ON [SATURDAY, DECEMBER 23, 2017]. IF YOU HAVE A NEW LICENSE, PLEASE UPDATE IT. OTHERWISE, PLEASE REACH OUT TO YOUR SUPPORT CONTACT.

COMMERCIAL PLUGINS OPERATING WITH REDUCED FUNCTIONALITY
- security
  - Cluster health, cluster stats and indices stats operations are blocked
  - All data operations (read and write) continue to work
- watcher
  - PUT / GET watch APIs are disabled, DELETE watch API continues to work
  - Watches execute and write to the history
  - The actions of the watches don't execute
- monitoring
  - The agent will stop collecting cluster and indices metrics
  - The agent will stop automatically cleaning indices older than [xpack.monitoring.history.duration]
- graph
  - Graph explore APIs are disabled
- ml
  - Machine learning APIs are disabled
- logstash
  - Logstash will continue to poll centrally-managed pipelines
- beats
  - Beats will continue to poll centrally-managed configuration
- deprecation
  - Deprecation APIs are disabled
- upgrade
  - Upgrade API is disabled
- sql
  - SQL support is disabled
- rollup
  - Creating and Starting rollup jobs will no longer be allowed.
  - Stopping/Deleting existing jobs, RollupCaps API and RollupSearch continue to function.

[2019-04-25T14:01:09,318][INFO ][o.e.g.GatewayService ] [4MRxPZh] recovered [497] indices into cluster_state

Kibana logs:

{"type":"log","@timestamp":"2019-04-25T20:43:58Z","tags":["status","plugin:beats_management@6.7.0","error"],"pid":20008,"state":"red","message":"Status changed from red to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"red","prevMsg":"Service Unavailable"}
{"type":"log","@timestamp":"2019-04-25T20:43:58Z","tags":["status","plugin:maps@6.7.0","error"],"pid":20008,"state":"red","message":"Status changed from red to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"red","prevMsg":"Service Unavailable"}
{"type":"log","@timestamp":"2019-04-25T20:43:58Z","tags":["status","plugin:index_management@6.7.0","error"],"pid":20008,"state":"red","message":"Status changed from red to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"red","prevMsg":"Service Unavailable"}
{"type":"log","@timestamp":"2019-04-25T20:43:58Z","tags":["status","plugin:index_lifecycle_management@6.7.0","error"],"pid":20008,"state":"red","message":"Status changed from red to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"red","prevMsg":"Service Unavailable"}
{"type":"log","@timestamp":"2019-04-25T20:43:58Z","tags":["status","plugin:rollup@6.7.0","error"],"pid":20008,"state":"red","message":"Status changed from red to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"red","prevMsg":"Service Unavailable"}
{"type":"log","@timestamp":"2019-04-25T20:43:58Z","tags":["status","plugin:remote_clusters@6.7.0","error"],"pid":20008,"state":"red","message":"Status changed from red to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"red","prevMsg":"Service Unavailable"}
{"type":"log","@timestamp":"2019-04-25T20:43:58Z","tags":["status","plugin:cross_cluster_replication@6.7.0","error"],"pid":20008,"state":"red","message":"Status changed from red to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"red","prevMsg":"Service Unavailable"}
{"type":"log","@timestamp":"2019-04-25T20:43:59Z","tags":["status","plugin:reporting@6.7.0","error"],"pid":20008,"state":"red","message":"Status changed from uninitialized to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-04-25T20:44:32Z","tags":["status","plugin:spaces@6.7.0","error"],"pid":20008,"state":"red","message":"Status changed from red to red - Request Timeout after 30000ms","prevState":"red","prevMsg":"[data] Elasticsearch cluster did not respond with license information."}
{"type":"log","@timestamp":"2019-04-25T20:44:32Z","tags":["fatal","root"],"pid":20008,"message":"{ Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n at Timeout. (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)\n status: undefined,\n displayName: 'RequestTimeout',\n message: 'Request Timeout after 30000ms',\n body: undefined,\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { statusCode: 503,\n error: 'Service Unavailable',\n message: 'Request Timeout after 30000ms' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}

Yesterday it was showing an error about an older X-Pack version running with ES 6.7, so I uninstalled X-Pack and commented out the x-pack settings in the .yml file.

Can you post your FULL kibana.yml and elasticsearch.yml files here? Please format them correctly by putting all code inside a pair of ```
OR
post them on GitHub and share the link.

git@github.com:ashnav1msit/ELK.git

Please clone the files from the repository above.

Any further suggestions?

Check if your Elasticsearch is up:

curl -X GET "localhost:9200/_cluster/health"

The status should be green or yellow. If it says red, the cluster is not healthy yet; if the request fails outright, ES is down. In either case, check whether you're passing auth credentials to ES correctly (if you're using auth at all). Apart from that, your elasticsearch.yml looks good.

Once your ES is up, only then move on to Kibana.

I'd recommend the following changes in your kibana.yml file:

server.host: "0.0.0.0"

# If your ES is protected, put the credentials below
elasticsearch.username: <username>
elasticsearch.password: <password>

# If you're using a client certificate, put its location below
elasticsearch.ssl.certificate: /path/to/your/client.crt

If everything still fails, remove the x-pack plugin entirely and try running native ES only.
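One more thing worth checking: the ES log you posted shows the trial license expired back in December 2017, which is why it reports commercial plugins operating with reduced functionality. If you don't need the commercial features, switching to the free basic license may clear those warnings. A sketch using the 6.x license endpoints:

$ curl -X GET "localhost:9200/_xpack/license"
$ curl -X POST "localhost:9200/_xpack/license/start_basic?acknowledge=true"

The first call shows the current license status; the second moves the cluster to a basic license (acknowledge=true confirms you accept losing the trial-only features).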

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.