Kibana Service Cannot Start After Upgrade from 6.7 to 7.0

Hi Guys,

After upgrading from 6.7 to 7.0, my Kibana cannot start, and I get the error message below.
I have one master node and one data node.

Apr 15 17:33:25 z3aries-n10 systemd: Starting Kibana...
Apr 15 17:33:27 z3aries-n10 logstash: warning: thread "Ruby-0-Thread-5: :1" terminated with exception (report_on_exception is true):
Apr 15 17:33:27 z3aries-n10 logstash: LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError: Got response code '503' contacting Elasticsearch at URL 'http://10.3.3.30:9200/logstash'
Apr 15 17:33:27 z3aries-n10 logstash: perform_request at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:80
Apr 15 17:33:27 z3aries-n10 logstash: perform_request_to_url at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291
Apr 15 17:33:27 z3aries-n10 logstash: perform_request at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278
Apr 15 17:33:27 z3aries-n10 logstash: with_connection at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373
Apr 15 17:33:27 z3aries-n10 logstash: perform_request at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277
Apr 15 17:33:27 z3aries-n10 logstash: Pool at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285
Apr 15 17:33:27 z3aries-n10 logstash: exists? at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:341
Apr 15 17:33:27 z3aries-n10 logstash: rollover_alias_exists? at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:359
Apr 15 17:33:27 z3aries-n10 logstash: maybe_create_rollover_alias at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/ilm.rb:89
Apr 15 17:33:27 z3aries-n10 logstash: setup_ilm at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/ilm.rb:11
Apr 15 17:33:27 z3aries-n10 logstash: setup_after_successful_connection at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/common.rb:51
Apr 15 17:33:27 z3aries-n10 logstash: [2019-04-15T17:33:27,834][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError: LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:80:in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291:inperform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278:in block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373:inwith_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277:in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285:inblock in Pool'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:341:in exists?'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:359:inrollover_alias_exists?'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/ilm.rb:89:in maybe_create_rollover_alias'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/ilm.rb:11:insetup_ilm'", "/usr/share/logstash/vendor/bund
Apr 15 17:33:27 z3aries-n10 logstash: le/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/common.rb:51:in `block in setup_after_successful_connection'"]}
Apr 15 17:33:27 z3aries-n10 logstash: [2019-04-15T17:33:27,891][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
Apr 15 17:33:28 z3aries-n10 systemd: logstash.service: main process exited, code=exited, status=1/FAILURE
Apr 15 17:33:28 z3aries-n10 systemd: Unit logstash.service entered failed state.
Apr 15 17:33:28 z3aries-n10 systemd: logstash.service failed.
Apr 15 17:33:28 z3aries-n10 systemd: logstash.service holdoff time over, scheduling restart.

Any idea what's wrong?

That log is from Logstash, not Kibana?

Sorry, I got the log from /var/log/messages and saw the Kibana service keep restarting.
I thought it was a Kibana service issue.
Anyhow, any idea what's wrong?

Here is another log:

Apr 15 17:43:20 z3aries-n10 systemd: Starting Kibana...
Apr 15 17:43:25 z3aries-n10 logstash: Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Apr 15 17:43:26 z3aries-n10 logstash: [2019-04-15T17:43:26,502][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.0.0"}
Apr 15 17:43:27 z3aries-n10 kibana: {"type":"log","@timestamp":"2019-04-15T09:43:27Z","tags":["plugin","warning"],"pid":12425,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
Apr 15 17:43:28 z3aries-n10 kibana: {"type":"log","@timestamp":"2019-04-15T09:43:28Z","tags":["fatal","root"],"pid":12425,"message":"{ ValidationError: child "elasticsearch" fails because ["url" is not allowed]\n at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:196:19)\n at internals.Object._validateWithOptions (/usr/share/kibana/node_modules/joi/lib/types/any/index.js:675:31)\n at module.exports.internals.Any.root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:146:23)\n at Config._commit (/usr/share/kibana/src/legacy/server/config/config.js:139:35)\n at Config.set (/usr/share/kibana/src/legacy/server/config/config.js:108:10)\n at Config.extendSchema (/usr/share/kibana/src/legacy/server/config/config.js:81:10)\n at extendConfigService (/usr/share/kibana/src/legacy/plugin_discovery/plugin_config/extend_config_service.js:45:10) name: 'ValidationError' }"}
Apr 15 17:43:28 z3aries-n10 kibana: FATAL ValidationError: child "elasticsearch" fails because ["url" is not allowed]
Apr 15 17:43:28 z3aries-n10 systemd: kibana.service: main process exited, code=exited, status=1/FAILURE
Apr 15 17:43:28 z3aries-n10 systemd: Unit kibana.service entered failed state.
Apr 15 17:43:28 z3aries-n10 systemd: kibana.service failed.
Apr 15 17:43:29 z3aries-n10 systemd: kibana.service holdoff time over, scheduling restart.

Might be a problem with your config.
Can you post it, making sure you format it with the </> button.

elasticsearch.yml [master node]

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
cluster.name: elk.stack
node.name: aries-n01
network.host: 10.3.3.30
http.port: 9200
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.3.3.30", "10.3.3.31"]

elasticsearch.yml [data node]

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
cluster.name: elk.stack
node.name: aries-n02
network.host: 10.3.3.31
http.port: 9200
node.master: false
node.data: true
discovery.zen.ping.unicast.hosts: ["10.3.3.30", "10.3.3.31"]

kibana.yml [master node]

server.host: "10.3.3.30"
elasticsearch.url: "http://10.3.3.30:9200"

kibana.yml [data node]

server.host: "10.3.3.31"
elasticsearch.url: "http://10.3.3.31:9200"

logstash.yml [master node]

path.data: /var/lib/logstash
path.logs: /var/log/logstash

logstash.yml [data node]

path.data: /var/lib/logstash
path.logs: /var/log/logstash

But the above config worked in 6.7 without any issues. The issue happened after upgrading from 6.7 to 7.0.

Can anyone assist with my upgrade issue?

I have exactly the same issue. Did you manage to resolve it?
I also upgraded from 6.7 to 7.0; everything is fine except starting Kibana.
Kibana keeps restarting and reports this to /var/log/messages:

xxkibana01 kibana: FATAL ValidationError: child "elasticsearch" fails because ["url" is not allowed]

I found the problem. They changed /etc/kibana/kibana.yml:

In version 6.x:
elasticsearch.url: "http://localhost:9200"

In version 7.x:
elasticsearch.hosts: ["http://localhost:9200"]

Just rename url to hosts (and wrap the value in a list), and my problem was resolved.

Exactly, that is the explanation. Since Kibana 7 we have moved away from the option elasticsearch.url, which is now elasticsearch.hosts. More info can be found here: https://www.elastic.co/guide/en/kibana/7.x/breaking-changes-7.0.html#_literal_elasticsearch_url_literal_is_no_longer_valid
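
For the configs posted earlier in this thread, the change would look roughly like this (a sketch reusing the same addresses; only the elasticsearch.url line needs to change, and the value becomes a list):

kibana.yml [master node]

server.host: "10.3.3.30"
elasticsearch.hosts: ["http://10.3.3.30:9200"]

kibana.yml [data node]

server.host: "10.3.3.31"
elasticsearch.hosts: ["http://10.3.3.31:9200"]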

Hi, after I changed elasticsearch.url to elasticsearch.hosts, the services still cannot start.

In the Elasticsearch cluster log:
[2019-04-23T17:33:07,383][WARN ][o.e.c.c.ClusterFormationFailureHelper] [aries-n01] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered []; discovery will continue using [10.3.3.31:9300] from hosts providers and [{aries-n01}{n-fNTnCQS_eTLm_W4P5Rag}vycMk_OJStmY8_sH7QqROQ}{10.3.3.30}{10.3.3.30:9300}{ml.machine_memory=8203431936, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
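
The warning itself points at the likely cause: the node has no elected master, and with no master most Elasticsearch APIs answer with 503, which also lines up with the Logstash BadResponseCodeError earlier in this thread. In 7.x the discovery settings changed: discovery.zen.ping.unicast.hosts is deprecated in favour of discovery.seed_hosts, and a cluster that has not previously been bootstrapped on v7+ needs cluster.initial_master_nodes listing the master-eligible node names. A minimal sketch for the two nodes posted above, assuming the same node names and addresses (and removing cluster.initial_master_nodes again once the cluster has formed):

elasticsearch.yml [master node]

discovery.seed_hosts: ["10.3.3.30", "10.3.3.31"]
cluster.initial_master_nodes: ["aries-n01"]

elasticsearch.yml [data node]

discovery.seed_hosts: ["10.3.3.30", "10.3.3.31"]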

Guys, any idea what's wrong?

Message log:
Apr 23 17:42:17 aries-n01 logstash: [2019-04-23T17:42:17,963][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.0.0"}
Apr 23 17:42:19 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:19Z","tags":["plugin","warning"],"pid":5299,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:kibana@undefined","info"],"pid":5299,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:elasticsearch@undefined","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:xpack_main@7.0.0","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:graph@7.0.0","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:monitoring@7.0.0","info"],"pid":5299,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:spaces@7.0.0","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["security","warning"],"pid":5299,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["security","warning"],"pid":5299,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:security@7.0.0","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:searchprofiler@7.0.0","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:ml@7.0.0","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:tilemap@7.0.0","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:watcher@7.0.0","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:grokdebugger@7.0.0","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:dashboard_mode@7.0.0","info"],"pid":5299,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:logstash@7.0.0","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:beats_management@7.0.0","info"],"pid":5299,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:apm_oss@undefined","info"],"pid":5299,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:21 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:21Z","tags":["status","plugin:apm@7.0.0","info"],"pid":5299,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
Apr 23 17:42:22 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:22Z","tags":["reporting","warning"],"pid":5299,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
Apr 23 17:42:22 aries-n01 kibana: {"type":"log","@timestamp":"2019-04-23T09:42:22Z","tags":["status","plugin:reporting@7.0.0","error"],"pid":5299,"state":"red","message":"Status changed from uninitialized to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"uninitialized","prevMsg":"uninitialized"}

Is the master and data node setup in version 7.0 different compared to version 6.7?

Is there anyone who can assist?

I guess no one wants to help me?

Hi, you can replace elasticsearch.url: "http://localhost:9200" in your Kibana configuration file with elasticsearch.hosts: ["http://localhost:9200"].

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.