ELK not working after upgrading from v6.7 to v7.1

Hello,

I recently upgraded my ELK stack from 6.7 to 7.1 and verified that Kibana, Elasticsearch, and Logstash are all running the new version.
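
For reference, this is roughly how I verified the versions (the paths assume the default package install and may differ on other systems):

    # Elasticsearch reports its version in the root response
    curl -X GET "localhost:9200"

    # Kibana and Logstash can print their versions from the command line
    /usr/share/kibana/bin/kibana --version
    /usr/share/logstash/bin/logstash --version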

Problem:
After upgrading to v7.1, I am unable to open the Kibana page. I found the following errors in Kibana.

Error message details:

  1. Message when opening the Kibana application: "Kibana server is not ready yet"

  2. Output of curl -X GET "localhost:9200/_cluster/health":
    {"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}

  3. Kibana error message trace (note: errors for a few plugins that were not compatible with v7.1 were identified and resolved by removing those plugins, but I am still getting the errors below):

["status","plugin:beats_management@7.1.1","error"],"pid":80354,"state":"red","message":"Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-06-13T13:49:19Z","tags":["status","plugin:maps@7.1.1","error"],"pid":80354,"state":"red","message":"Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-06-13T13:49:19Z","tags":["status","plugin:index_management@7.1.1","error"],"pid":80354,"state":"red","message":"Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-06-13T13:49:19Z","tags":["status","plugin:index_lifecycle_management@7.1.1","error"],"pid":80354,"state":"red","message":"Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-06-13T13:49:19Z","tags":["status","plugin:rollup@7.1.1","error"],"pid":80354,"state":"red","message":"Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-06-13T13:49:19Z","tags":["status","plugin:remote_clusters@7.1.1","error"],"pid":80354,"state":"red","message":"Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-06-13T13:49:19Z","tags":["status","plugin:cross_cluster_replication@7.1.1","error"],"pid":80354,"state":"red","message":"Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-06-13T13:49:20Z","tags":["status","plugin:reporting@7.1.1","error"],"pid":80354,"state":"red","message":"Status changed from uninitialized to red - [data] Elasticsearch cluster did not respond with license information.","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"error","@timestamp":"2019-06-13T13:49:49Z","tags":["warning","process"],"pid":80354,"level":"error","error":{"message":"Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n at Timeout. (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)","name":"UnhandledPromiseRejectionWarning","stack":"UnhandledPromiseRejectionWarning: Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n at Timeout. (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)\n at emitWarning (internal/process/promises.js:81:15)\n at emitPromiseRejectionWarnings (internal/process/promises.js:120:9)\n at process._tickCallback (internal/process/next_tick.js:69:34)"},"message":"Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n at Timeout. (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)"}
{"type":"error","@timestamp":"2019-06-13T13:49:49Z","tags":["warning","process"],"pid":80354,"level":"error","error":{"message":"Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)","name":"UnhandledPromiseRejectionWarning","stack":"Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n at Timeout. (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)"},"message":"Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)"}
{"type":"log","@timestamp":"2019-06-13T13:49:50Z","tags":["reporting","warning"],"pid":80354,"message":"Could not retrieve cluster settings, because of Request Timeout after 30000ms"}
{"type":"log","@timestamp":"2019-06-13T13:49:50Z","tags":["warning","task_manager"],"pid":80354,"message":"PollError Request Timeout after 30000ms"}
{"type":"log","@timestamp":"2019-06-13T13:49:50Z","tags":["warning","maps"],"pid":80354,"message":"Error scheduling telemetry task, received NotInitialized: Tasks cannot be scheduled until after task manager is initialized!"}
{"type":"log","@timestamp":"2019-06-13T13:49:50Z","tags":["warning","telemetry"],"pid":80354,"message":"Error scheduling task, received NotInitialized: Tasks cannot be scheduled until after task manager is initialized!"}

Hi,

I am also getting the error message below in the Elasticsearch log, even though the cluster has one master node and one data node.

[2019-06-13T20:09:16,330][WARN ][o.e.c.c.ClusterFormationFailureHelper] [elkt1] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered ; discovery will continue using [192.168.1.61:9300] from hosts providers and [{elkt1}{4sZaM301RFWsFv-kdx7HGw}{Isj-FI25QxS3J3si-NbINw}{192.168.1.26}{192.168.1.26:9300}{ml.machine_memory=4125442048, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0

The upgrade instructions include the following:

If upgrading from a 6.x cluster, you must configure cluster bootstrapping by setting the cluster.initial_master_nodes setting.

It looks like you did not do this.
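
For a single master-eligible node like the one in your log, a minimal sketch of the entry in elasticsearch.yml would look something like this (using the node name elkt1 from your log; adjust it to your own master-eligible node names):

    # elasticsearch.yml
    # Names of the master-eligible nodes used to bootstrap the cluster
    # the first time it forms after the upgrade to 7.x
    cluster.initial_master_nodes: ["elkt1"]

This setting is only used when the cluster forms for the very first time; once the cluster has bootstrapped successfully it can be removed again.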


Dear David,

Many thanks for your suggestion. After adding the suggested entry to elasticsearch.yml, everything started working fine.
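
For anyone else hitting this: the cluster health call that previously returned master_not_discovered_exception with a 503 now responds with a normal status, which is a quick way to confirm the master has been elected:

    curl -X GET "localhost:9200/_cluster/health?pretty"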


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.