Hello,
I just had a quick question on Kibana and Elasticsearch. I am getting what seems to be a pretty common error: "Kibana server is not ready yet". This happened after I followed this tutorial: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/security-basic-setup-https.html#encrypt-kibana-elasticsearch
I understand that this error means there is trouble establishing a connection to Elasticsearch, but I don't see why that would be happening.
Here is my elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: General-SecurityCluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: CRUD-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: Elasticsearch's IP
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["Kibana's IP", "Elasticsearch's IP", "Logstash's IP"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Security ---------------------------------
#
# Enables Elastic Security. Set this value to true or false
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/http.p12
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
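In case it helps, this is the kind of check I can run from the Kibana machine to see whether Elasticsearch answers over HTTPS at all. The IP and the CA path below are placeholders rather than my real values (the CA file is the one I copied over while following the tutorial):
# Placeholder IP and CA path; curl prompts for the elastic user's password.
curl --cacert /etc/kibana/elasticsearch-ca.pem -u elastic "https://ElasticsearchIP:9200"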
Here is my kibana.yml:
# The default application to load.
#kibana.defaultAppId: "home"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana_system"
elasticsearch.password: "Password For kibana_system"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000
# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false
# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid
# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
#
# --------------------------------------------- Security --------------------------------------------
#
elasticsearch.ssl.certificateAuthorities: /PathToMyCA/
server.ssl.enabled: true
server.ssl.key: /PathToMyKey/
server.ssl.certificate: /PathToMyCertificate/
#
# ---------------------------------------------------------------------------------------------------
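I can also verify the kibana_system credentials directly against Elasticsearch from the Kibana machine with something like the following (again, the IP and CA path are placeholders). If the password is correct, it should return a small JSON document describing the kibana_system user:
# Placeholder IP and CA path; curl prompts for the kibana_system password.
curl --cacert /etc/kibana/elasticsearch-ca.pem -u kibana_system "https://ElasticsearchIP:9200/_security/_authenticate"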
Here are the Kibana logs. These were generated after deleting the log file and letting it regenerate itself:
{"type":"log","@timestamp":"2022-01-04T00:45:02+00:00","tags":["info","plugins-service"],"pid":1092,"message":"Plugin \"metricsEntities\" is disabled."}
{"type":"log","@timestamp":"2022-01-04T00:45:02+00:00","tags":["warning","config","deprecation"],"pid":1092,"message":"\"logging.dest\" has been deprecated and will be removed in 8.0. To set the destination mo
ving forward, you can use the \"console\" appender in your logging configuration or define a custom one. For more details, see https://github.com/elastic/kibana/blob/master/src/core/server/logging/README.mdx"}
{"type":"log","@timestamp":"2022-01-04T00:45:02+00:00","tags":["warning","config","deprecation"],"pid":1092,"message":"plugins.scanDirs is deprecated and is no longer used"}
{"type":"log","@timestamp":"2022-01-04T00:45:02+00:00","tags":["warning","config","deprecation"],"pid":1092,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required
for email notifications to work in 8.0.\""}
{"type":"log","@timestamp":"2022-01-04T00:45:02+00:00","tags":["warning","config","deprecation"],"pid":1092,"message":"\"xpack.reporting.roles\" is deprecated. Granting reporting privilege through a \"reportin
g_user\" role will not be supported starting in 8.0. Please set \"xpack.reporting.roles.enabled\" to \"false\" and grant reporting privileges to users using Kibana application privileges **Management > Securit
y > Roles**."}
{"type":"log","@timestamp":"2022-01-04T00:45:02+00:00","tags":["info","http","server","NotReady"],"pid":1092,"message":"http server running at https://Kibana'sIP"}
{"type":"log","@timestamp":"2022-01-04T00:45:02+00:00","tags":["info","plugins-system"],"pid":1092,"message":"Setting up [106] plugins: [translations,taskManager,licensing,globalSearch,globalSearchProviders,ba
nners,licenseApiGuard,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,screenshotMode,telemetry,newsfeed,mapsEms,mapsLegacy,legacyExp
ort,kibanaLegacy,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,savedObjects,visualizations,visTypeXy,visTypeVislib,visTypeTimelion,features,visTypeTagcloud,visTypeTable,visTypePie,visT
ypeMetric,visTypeMarkdown,tileMap,regionMap,presentationUtil,timelion,home,searchprofiler,painlessLab,grokdebugger,graph,visTypeVega,management,watcher,licenseManagement,indexPatternManagement,advancedSettings
,discover,discoverEnhanced,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,savedObjectsManagement,spaces,security,transform,savedObjectsTagging,lens,reporting,canvas,lists,ingestPipelines,fileUpload,ma
ps,dataVisualizer,encryptedSavedObjects,dataEnhanced,timelines,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,
enterpriseSearch,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,osquery,ml,cases,securitySolution,observability,uptime,infra,monitoring,logstash,console,apmOss,apm]"}
{"type":"log","@timestamp":"2022-01-04T00:45:02+00:00","tags":["info","plugins","taskManager"],"pid":1092,"message":"TaskManager is identified by the Kibana UUID: 016f584d-9d74-443d-b2dd-851925d9b93a"}
{"type":"log","@timestamp":"2022-01-04T00:45:04+00:00","tags":["warning","plugins","security","config"],"pid":1092,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from
being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2022-01-04T00:45:04+00:00","tags":["warning","plugins","reporting","config"],"pid":1092,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions fro
m being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2022-01-04T00:45:04+00:00","tags":["info","plugins","reporting","config"],"pid":1092,"message":"Chromium sandbox provides an additional layer of protection, and is supported for Lin
ux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox."}
{"type":"log","@timestamp":"2022-01-04T00:45:04+00:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":1092,"message":"Saved objects encryption key is not set. This will severely limit Kibana functi
onality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2022-01-04T00:45:04+00:00","tags":["warning","plugins","actions","actions"],"pid":1092,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption
key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2022-01-04T00:45:04+00:00","tags":["warning","plugins","alerting","plugins","alerting"],"pid":1092,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing
encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2022-01-04T00:45:07+00:00","tags":["info","plugins","ruleRegistry"],"pid":1092,"message":"Write is disabled, not installing assets"}
{"type":"log","@timestamp":"2022-01-04T00:45:08+00:00","tags":["info","savedobjects-service"],"pid":1092,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved object
s migrations..."}
{"type":"log","@timestamp":"2022-01-04T00:45:08+00:00","tags":["error","savedobjects-service"],"pid":1092,"message":"Unable to retrieve version information from Elasticsearch nodes. Hostname/IP does not match
certificate's altnames: IP: ElasticsearchesIP is not in the cert's list: Kibana'sIP, OtherNetworkingIP, OtherNetworkingIP"}
This error seems pretty informative, except that I don't know how to edit the IPs in the cert's list. Is this something that has to be done while generating the certificate with the elasticsearch-certutil tool, or am I missing something simple? For context, our cert was generated on our Elasticsearch machine using certutil and then copied over scp to our Kibana machine.
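In case it's useful, this is roughly what I was planning to try next, based on my reading of the docs (all IPs, hostnames, and paths below are placeholders, and I may well have the flags wrong):
# 1) See which names/IPs the HTTP certificate actually contains (run from the Kibana machine):
openssl s_client -connect ElasticsearchIP:9200 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
# 2) Regenerate the HTTP certificate on the Elasticsearch machine so it includes that node's IP and hostname,
#    signed by the same CA from the tutorial, then move it into place and restart Elasticsearch:
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --name CRUD-1 --dns ElasticsearchHostname --ip ElasticsearchIP --out http.p12
# sudo mv http.p12 /etc/elasticsearch/http.p12 && sudo systemctl restart elasticsearch
Alternatively, I assume re-running the interactive bin/elasticsearch-certutil http wizard from the tutorial and entering the node's IP when it asks for addresses would accomplish the same thing. Does that look like the right approach?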
Thanks In Advance,
Jared