After Upgrade from 6.7 to 7.0, Kibana Service Cannot Start


(Woody) #1

Hi Guys,

After upgrading from 6.7 to 7.0, my Kibana cannot start, and I am getting the error message below.
I have one master node and one data node.

Apr 15 17:33:25 z3aries-n10 systemd: Starting Kibana...
Apr 15 17:33:27 z3aries-n10 logstash: warning: thread "Ruby-0-Thread-5: :1" terminated with exception (report_on_exception is true):
Apr 15 17:33:27 z3aries-n10 logstash: LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError: Got response code '503' contacting Elasticsearch at URL 'http://10.3.3.30:9200/logstash'
Apr 15 17:33:27 z3aries-n10 logstash: perform_request at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:80
Apr 15 17:33:27 z3aries-n10 logstash: perform_request_to_url at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291
Apr 15 17:33:27 z3aries-n10 logstash: perform_request at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278
Apr 15 17:33:27 z3aries-n10 logstash: with_connection at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373
Apr 15 17:33:27 z3aries-n10 logstash: perform_request at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277
Apr 15 17:33:27 z3aries-n10 logstash: Pool at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285
Apr 15 17:33:27 z3aries-n10 logstash: exists? at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:341
Apr 15 17:33:27 z3aries-n10 logstash: rollover_alias_exists? at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:359
Apr 15 17:33:27 z3aries-n10 logstash: maybe_create_rollover_alias at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/ilm.rb:89
Apr 15 17:33:27 z3aries-n10 logstash: setup_ilm at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/ilm.rb:11
Apr 15 17:33:27 z3aries-n10 logstash: setup_after_successful_connection at /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/common.rb:51
Apr 15 17:33:27 z3aries-n10 logstash: [2019-04-15T17:33:27,834][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError: LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:80:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278:in `block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285:in `block in Pool'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:341:in `exists?'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:359:in `rollover_alias_exists?'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/ilm.rb:89:in `maybe_create_rollover_alias'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/ilm.rb:11:in `setup_ilm'", "/usr/share/logstash/vendor/bund
Apr 15 17:33:27 z3aries-n10 logstash: le/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/common.rb:51:in `block in setup_after_successful_connection'"]}
Apr 15 17:33:27 z3aries-n10 logstash: [2019-04-15T17:33:27,891][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
Apr 15 17:33:28 z3aries-n10 systemd: logstash.service: main process exited, code=exited, status=1/FAILURE
Apr 15 17:33:28 z3aries-n10 systemd: Unit logstash.service entered failed state.
Apr 15 17:33:28 z3aries-n10 systemd: logstash.service failed.
Apr 15 17:33:28 z3aries-n10 systemd: logstash.service holdoff time over, scheduling restart.

Any idea what's wrong?


(Mark Walkom) #2

That log is from Logstash, not Kibana?


(Woody) #3

Sorry, I got the log from /var/log/messages and saw the Kibana service keeps restarting.
I thought it was a Kibana service issue.
Anyhow, any idea what's wrong?


(Woody) #4

Here is another log:

Apr 15 17:43:20 z3aries-n10 systemd: Starting Kibana...
Apr 15 17:43:25 z3aries-n10 logstash: Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Apr 15 17:43:26 z3aries-n10 logstash: [2019-04-15T17:43:26,502][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.0.0"}
Apr 15 17:43:27 z3aries-n10 kibana: {"type":"log","@timestamp":"2019-04-15T09:43:27Z","tags":["plugin","warning"],"pid":12425,"path":"/usr/share/kibana/src/legacy/core_plugins/ems_util","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/ems_util"}
Apr 15 17:43:28 z3aries-n10 kibana: {"type":"log","@timestamp":"2019-04-15T09:43:28Z","tags":["fatal","root"],"pid":12425,"message":"{ ValidationError: child "elasticsearch" fails because ["url" is not allowed]\n at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:196:19)\n at internals.Object._validateWithOptions (/usr/share/kibana/node_modules/joi/lib/types/any/index.js:675:31)\n at module.exports.internals.Any.root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:146:23)\n at Config._commit (/usr/share/kibana/src/legacy/server/config/config.js:139:35)\n at Config.set (/usr/share/kibana/src/legacy/server/config/config.js:108:10)\n at Config.extendSchema (/usr/share/kibana/src/legacy/server/config/config.js:81:10)\n at extendConfigService (/usr/share/kibana/src/legacy/plugin_discovery/plugin_config/extend_config_service.js:45:10) name: 'ValidationError' }"}
Apr 15 17:43:28 z3aries-n10 kibana: FATAL ValidationError: child "elasticsearch" fails because ["url" is not allowed]
Apr 15 17:43:28 z3aries-n10 systemd: kibana.service: main process exited, code=exited, status=1/FAILURE
Apr 15 17:43:28 z3aries-n10 systemd: Unit kibana.service entered failed state.
Apr 15 17:43:28 z3aries-n10 systemd: kibana.service failed.
Apr 15 17:43:29 z3aries-n10 systemd: kibana.service holdoff time over, scheduling restart.


(Mark Walkom) #5

Might be a problem with your config.
Can you post it, making sure you format it with the </> button?


(Woody) #6

elasticsearch.yml [master node]

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
cluster.name: elk.stack
node.name: aries-n01
network.host: 10.3.3.30
http.port: 9200
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.3.3.30", "10.3.3.31"]

elasticsearch.yml [data node]

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
cluster.name: elk.stack
node.name: aries-n02
network.host: 10.3.3.31
http.port: 9200
node.master: false
node.data: true
discovery.zen.ping.unicast.hosts: ["10.3.3.30", "10.3.3.31"]

kibana.yml [master node]

server.host: "10.3.3.30"
elasticsearch.url: "http://10.3.3.30:9200"

kibana.yml [data node]

server.host: "10.3.3.31"
elasticsearch.url: "http://10.3.3.31:9200"

logstash.yml [master node]

path.data: /var/lib/logstash
path.logs: /var/log/logstash

logstash.yml [data node]

path.data: /var/lib/logstash
path.logs: /var/log/logstash

But the configs above worked in 6.7 without any issues. The issue only happened after upgrading from 6.7 to 7.0.


(Woody) #7

Can anyone assist with my upgrade issue?


(Petr Simik) #8

I have exactly the same issue. Did you manage to resolve it?
I also upgraded from 6.7 to 7.0 and everything went fine except starting Kibana.
Kibana keeps restarting and reports this to /var/log/messages:

xxkibana01 kibana: FATAL ValidationError: child "elasticsearch" fails because ["url" is not allowed]


(Petr Simik) #9

I found the problem.
They changed the setting in
/etc/kibana/kibana.yml

In version 6.x:
elasticsearch.url: "http://localhost:9200"

In version 7.x:
elasticsearch.hosts: ["http://localhost:9200"]

Just renaming url to hosts (and wrapping the value in a list) resolved my problem.
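
For the kibana.yml files posted earlier in this thread, the corrected configs would look something like the sketch below (hosts taken from the snippets above, so adjust to your own environment), followed by a restart of the service:

kibana.yml [master node]
server.host: "10.3.3.30"
elasticsearch.hosts: ["http://10.3.3.30:9200"]

kibana.yml [data node]
server.host: "10.3.3.31"
elasticsearch.hosts: ["http://10.3.3.31:9200"]

systemctl restart kibana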


(Tiago Costa) #10

Exactly, that is the explanation. Since Kibana 7 we moved away from the option elasticsearch.url, which is now elasticsearch.hosts. More info can be found here: https://www.elastic.co/guide/en/kibana/7.x/breaking-changes-7.0.html#_literal_elasticsearch_url_literal_is_no_longer_valid
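
For reference, the new setting takes a list, so Kibana can also be pointed at more than one node of the same cluster. A minimal sketch, reusing the hosts from this thread purely as an illustration:

elasticsearch.hosts: ["http://10.3.3.30:9200"]
# or several nodes of the same cluster:
elasticsearch.hosts: ["http://10.3.3.30:9200", "http://10.3.3.31:9200"]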