Kibana server is not ready yet

I am slowly making headway with my installation of ELK Stack 7.13.4. At one point I was unable to connect at all, which turned out to be the firewalls on the Red Hat Enterprise Linux 8 operating systems. I have since allowed ports 5601, 9200 and 9600 (both TCP and UDP) on my Elasticsearch and Kibana servers so that nothing is blocking the flow of information. Now I have the dreaded "Kibana server is not ready yet" message.
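
For anyone hitting the same wall, the firewall piece was just a matter of opening those ports with firewalld on each box. Roughly what I ran (repeated with /udp as well, probably unnecessary but I wanted it ruled out):

sudo firewall-cmd --permanent --add-port=5601/tcp
sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --permanent --add-port=9600/tcp
sudo firewall-cmd --reload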

I have made sure that there is only one instance of Kibana running, using lsof -i -P -n | grep LISTEN and systemctl status kibana.service.
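
Spelled out, the checks were along these lines (service and user names are the stock RPM ones, adjust if yours differ):

sudo systemctl status kibana.service        # should show a single active (running) process
sudo lsof -i -P -n | grep LISTEN            # expect one listener on 5601 owned by the kibana user
ps -ef | grep [k]ibana                      # belt and braces: only one kibana process listed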

On the Elasticsearch machine, if I run curl -XGET http://IP:9200 I am returned with connection refused. If I run the same curl command using localhost instead of the IP, I am returned with the proper information.

On the Kibana server, if I run the curl command against localhost I get connection refused. Against the IP I am greeted with "Kibana server is not ready yet".
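
For reference, these are the curl and socket checks I have been using to compare the two boxes (IP stands in for the server's real address, which I cannot paste out of this environment):

curl -XGET http://localhost:9200    # answers on the Elasticsearch box, refused on the Kibana box
curl -XGET http://IP:9200           # currently refused on the Elasticsearch box
sudo ss -tlnp | grep 9200           # shows which address Elasticsearch is actually bound to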

The Kibana log repeats the following warnings:
["warning","plugins","reporting","config"], "pid":"####", "message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."
"Session cookies will be transmitted over insecure connections. This is not recommended."

In my kibana.yml file I have made sure that the Kibana server's IP is listed as well as the port. I have added Elasticsearch as localhost under elasticsearch.hosts, with the correct port.

I am not sure what my next steps are. As a note, it is very hard, verging on impossible, to pull information out of my Protected B environment.

It sounds like there's a misconfiguration between the Elasticsearch & Kibana YAML files in terms of which network each service is pointing at (Elasticsearch at localhost & Kibana at an IP). Usually, Dev will ask for your YAML files to cross-compare & point out the exact issues.

Thinking out loud of next steps I'd investigate to supplement above for their review:

  • RHEL 8 is supported on 7.13.4
  • You'd have installed Kibana & Elasticsearch from the RPMs
    • what did you set kibana.yml's elasticsearch.hosts to? Is it correctly referencing the Elasticsearch server?
    • what did you set elasticsearch.yml's network.host to (reference)?
    • have you overridden any Kibana defaults like server.basePath, server.host or server.port?
  • I'd bump kibana.yml's log level to debug & restart the service to see what falls out (see the sketch after this list).
    • The "Generating a random key" warning shouldn't be blocking start-up, but I'd expect something nearer the top of the start-up log to be more helpful.
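
A minimal sketch of what I mean by bumping the log level, using the legacy logging keys that ship commented out in the default kibana.yml (the log file path is only an example, point it anywhere the kibana user can write):

# kibana.yml -- temporary, roll it back once you've captured a full debug start-up
logging.verbose: true
logging.dest: /var/log/kibana/kibana.log   # example path; stdout also works

then restart with systemctl restart kibana.service. As for the encryptionKey warnings: they're cosmetic at this stage, but you can eventually quiet them by pinning the keys in kibana.yml (values must be 32+ characters, placeholders below) or by using the bin/kibana-encryption-keys tool the message points at:

# kibana.yml -- placeholder values, generate your own
xpack.security.encryptionKey: "replace_with_any_string_of_32_plus_chars"
xpack.reporting.encryptionKey: "replace_with_any_string_of_32_plus_chars"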

Good morning, and thank you for the reply. In answer to your questions:

I do believe that RHEL 8 is supported by ELK 7.13.4

In terms of configuration I have the following:
kibana.yml --> elasticsearch.hosts: "http://localhost:9200"
elasticsearch.yml --> network.host: 0.0.0.0

I have not changed either server.basePath or server.port, but I did modify server.host in the kibana.yml file to be the IP address of the Kibana server it is installed on.

More information for the people following this thread: I have 3 individual RHEL 8 servers for my ELK SIEM stack, one for each member of the stack; that is to say, I have 1 Elasticsearch, 1 Kibana and 1 Logstash server. I have not started adding any security yet, as I want to make sure they communicate first before I start adding the xpack.security services.
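
Just to check my own understanding of how the three boxes should point at each other, this is the wiring I think I am aiming for (addresses are placeholders along the lines of my test lab, not the real Protected B ones), so please shout if this picture is wrong:

# elasticsearch.yml on the Elasticsearch server
network.host: 0.0.0.0                              # listen on all interfaces
http.port: 9200

# kibana.yml on the Kibana server
server.host: "192.168.56.8"                        # the Kibana server's own address (placeholder)
server.port: 5601
elasticsearch.hosts: "http://192.168.56.7:9200"    # the Elasticsearch server's address (placeholder), not localhost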

I will change the log level to debug and see what we get after that.

Thank you all again.

After running in debug mode for 5 minutes and continually trying to hit and open the Kibana UI, here is what I find interesting in the kibana.log.

The first line that stands out is "Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations." I followed the suggestion from the following link:

It didn't have an effect on my system; I am still receiving the same message about Kibana not being ready.

I also see the next line, "Unable to retrieve version information from Elasticsearch nodes", followed by "Stopping all plugins.", then "Monitoring stats collection is stopped" and the "eventLog" plugin warning "plugin didn't stop in 30sec., move on to the next."
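
From what I can tell, that error just means Kibana's HTTP request to whatever is configured in elasticsearch.hosts is failing, so I have been reproducing it by hand from the Kibana server with curl against that same URL:

# run on the Kibana server, using the exact value from kibana.yml's elasticsearch.hosts
curl -v http://localhost:9200

Right now that curl is refused from the Kibana box, which lines up with the log entry.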

I have been able to grab a makeshift copy of the Kibana and Elasticsearch .yml files. I have recreated them in my test environment with the same configuration but different names and IP addresses.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["127.0.0.1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "192.168.56.8"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "CENTOS_Kibana"

# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: "http://0.0.0.0:9200"

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

I believe I am getting closer to resolving this, but I would still like some extra insight and eyes on the glass if that is possible. I have made minor changes to the .yml files: I have removed the # in front of server.host and server.port as well as elasticsearch.hosts, and removed the quotation marks that surrounded http://0.0.0.0:9200 in the kibana.yml file.
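
One thing I am second-guessing: elasticsearch.hosts now reads http://0.0.0.0:9200, and as I understand it 0.0.0.0 is a listen-on-every-interface bind address rather than an address a client can usefully connect to. Should that line instead carry the Elasticsearch server's real address, along these lines (placeholder IP)?

elasticsearch.hosts: "http://192.168.56.7:9200"   # placeholder for the Elasticsearch server's address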

In the elasticsearch.yml file I have removed the # from network.host, http.port and discovery.seed_hosts.
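
Side question while I am in that file: since I only have the one Elasticsearch node, would it be cleaner to drop the discovery.seed_hosts line (with its leftover "host2" entry) and declare a one-node cluster instead? This is what I had in mind, not yet applied:

# elasticsearch.yml -- single-node alternative to seed hosts / initial master nodes
discovery.type: single-node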

Now when I run a curl to Elasticsearch from either the Kibana box or the Elasticsearch box I receive the proper response. If I run a curl from either of the two machines against Kibana I still receive "Kibana server is not ready yet".
Current kibana log below.

{"type":"log","@timestamp":"2021-10-04T08:42:50-04:00","tags":["info","plugins-service"],"pid":1886,"message":"Plugin \"timelines\" is disabled."}
{"type":"log","@timestamp":"2021-10-04T08:42:51-04:00","tags":["warning","config","deprecation"],"pid":1886,"message":"\"logging.dest\" has been deprecated and will be removed in 8.0. To set the destination moving forward, you can use the \"console\" appender in your logging configuration or define a custom one. For more details, see https://github.com/elastic/kibana/blob/master/src/core/server/logging/README.mdx"}
{"type":"log","@timestamp":"2021-10-04T08:42:51-04:00","tags":["warning","config","deprecation"],"pid":1886,"message":"plugins.scanDirs is deprecated and is no longer used"}
{"type":"log","@timestamp":"2021-10-04T08:42:51-04:00","tags":["warning","config","deprecation"],"pid":1886,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.\""}
{"type":"log","@timestamp":"2021-10-04T08:42:52-04:00","tags":["info","plugins-system"],"pid":1886,"message":"Setting up [106] plugins: [code,taskManager,licensing,globalSearch,globalSearchProviders,banners,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,licenseApiGuard,translations,legacyExport,embeddable,uiActionsEnhanced,esUiShared,expressions,charts,bfetch,data,home,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,advancedSettings,savedObjects,visualizations,visTypeVislib,visTypeVega,visTypeTagcloud,visTypeMetric,visTypeTimelion,features,licenseManagement,watcher,visTypeMarkdown,visTypeTable,visTypeXy,tileMap,regionMap,presentationUtil,canvas,graph,timelion,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,inputControlVis,indexPatternManagement,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,lens,reporting,lists,encryptedSavedObjects,dashboardMode,dataEnhanced,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,remoteClusters,crossClusterReplication,rollup,indexLifecycleManagement,enterpriseSearch,beatsManagement,transform,ingestPipelines,fileUpload,maps,fileDataVisualizer,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,observability,osquery,ml,securitySolution,cases,infra,monitoring,logstash,apm,uptime]"}
{"type":"log","@timestamp":"2021-10-04T08:42:52-04:00","tags":["info","plugins","taskManager"],"pid":1886,"message":"TaskManager is identified by the Kibana UUID: aa0e8f7c-b59d-4aa7-9857-48024e0ae856"}
{"type":"log","@timestamp":"2021-10-04T08:42:56-04:00","tags":["warning","plugins","security","config"],"pid":1886,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-10-04T08:42:56-04:00","tags":["warning","plugins","security","config"],"pid":1886,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2021-10-04T08:42:57-04:00","tags":["warning","plugins","reporting","config"],"pid":1886,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-10-04T08:42:57-04:00","tags":["warning","plugins","reporting","config"],"pid":1886,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 7.9.2009 OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
{"type":"log","@timestamp":"2021-10-04T08:42:57-04:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":1886,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-10-04T08:42:57-04:00","tags":["warning","plugins","actions","actions"],"pid":1886,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-10-04T08:42:57-04:00","tags":["warning","plugins","alerting","plugins","alerting"],"pid":1886,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-10-04T08:43:06-04:00","tags":["info","plugins","monitoring","monitoring"],"pid":1886,"message":"config sourced from: production cluster"}
{"type":"log","@timestamp":"2021-10-04T08:43:07-04:00","tags":["info","savedobjects-service"],"pid":1886,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","@timestamp":"2021-10-04T08:43:07-04:00","tags":["error","savedobjects-service"],"pid":1886,"message":"Unable to retrieve version information from Elasticsearch nodes."}
{"type":"log","@timestamp":"2021-10-04T09:10:47-04:00","tags":["info","plugins-system"],"pid":1886,"message":"Stopping all plugins."}
{"type":"log","@timestamp":"2021-10-04T09:10:47-04:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":1886,"message":"Monitoring stats collection is stopped"}
{"type":"log","@timestamp":"2021-10-04T09:11:17-04:00","tags":["warning","plugins-system"],"pid":1886,"message":"\"eventLog\" plugin didn't stop in 30sec., move on to the next."}
{"type":"log","@timestamp":"2021-10-04T09:12:34-04:00","tags":["info","plugins-service"],"pid":4198,"message":"Plugin \"timelines\" is disabled."}
{"type":"log","@timestamp":"2021-10-04T09:12:35-04:00","tags":["warning","config","deprecation"],"pid":4198,"message":"\"logging.dest\" has been deprecated and will be removed in 8.0. To set the destination moving forward, you can use the \"console\" appender in your logging configuration or define a custom one. For more details, see https://github.com/elastic/kibana/blob/master/src/core/server/logging/README.mdx"}
{"type":"log","@timestamp":"2021-10-04T09:12:35-04:00","tags":["warning","config","deprecation"],"pid":4198,"message":"plugins.scanDirs is deprecated and is no longer used"}
{"type":"log","@timestamp":"2021-10-04T09:12:35-04:00","tags":["warning","config","deprecation"],"pid":4198,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.\""}
{"type":"log","@timestamp":"2021-10-04T09:12:36-04:00","tags":["info","plugins-system"],"pid":4198,"message":"Setting up [106] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,banners,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,translations,licenseApiGuard,legacyExport,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,home,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,advancedSettings,savedObjects,visualizations,visTypeVislib,visTypeVega,visTypeTimelion,features,licenseManagement,watcher,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeMarkdown,visTypeXy,tileMap,regionMap,presentationUtil,canvas,graph,timelion,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,inputControlVis,indexPatternManagement,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,lens,reporting,lists,encryptedSavedObjects,dataEnhanced,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,beatsManagement,transform,ingestPipelines,fileUpload,maps,fileDataVisualizer,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,observability,osquery,ml,securitySolution,cases,infra,monitoring,logstash,apm,uptime]"}
{"type":"log","@timestamp":"2021-10-04T09:12:36-04:00","tags":["info","plugins","taskManager"],"pid":4198,"message":"TaskManager is identified by the Kibana UUID: aa0e8f7c-b59d-4aa7-9857-48024e0ae856"}
{"type":"log","@timestamp":"2021-10-04T09:12:38-04:00","tags":["warning","plugins","security","config"],"pid":4198,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-10-04T09:12:38-04:00","tags":["warning","plugins","security","config"],"pid":4198,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2021-10-04T09:12:39-04:00","tags":["warning","plugins","reporting","config"],"pid":4198,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-10-04T09:12:39-04:00","tags":["warning","plugins","reporting","config"],"pid":4198,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 7.9.2009 OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
{"type":"log","@timestamp":"2021-10-04T09:12:39-04:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":4198,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-10-04T09:12:39-04:00","tags":["warning","plugins","actions","actions"],"pid":4198,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-10-04T09:12:39-04:00","tags":["warning","plugins","alerting","plugins","alerting"],"pid":4198,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-10-04T09:12:47-04:00","tags":["info","plugins","monitoring","monitoring"],"pid":4198,"message":"config sourced from: production cluster"}
{"type":"log","@timestamp":"2021-10-04T09:12:49-04:00","tags":["info","savedobjects-service"],"pid":4198,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","@timestamp":"2021-10-04T09:12:49-04:00","tags":["error","savedobjects-service"],"pid":4198,"message":"Unable to retrieve version information from Elasticsearch nodes."}

Hello Elastic company and gurus. I am still fighting the good fight here, but I am not getting any further than where I was yesterday. If you have had time to go through the logs and configurations I supplied, any help is greatly appreciated.

Thank you in advance.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.