Kibana: Unable to retrieve version information from Elasticsearch nodes

Hi all, I have a three-member cluster running version 7.16.1.
My elasticsearch.yml config is as follows:


# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: cluster_security
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["10.0.0.53", "10.0.0.230","10.0.0.118"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1", "node-2","node-3"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features. 
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12 
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: http.p12
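
The secure passwords for these keystore files go into the Elasticsearch keystore; a sketch of the usual commands, run from each node's bin directory (the http one appears in the transcript further below):

elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password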

I created the elasticsearch-ssl-http.zip file as follows:

1. On node 10.0.0.53, the following command was executed:
elasticsearch-certutil http

D:\ELK\elasticsearch-7.16\elasticsearch-7.16.1\bin>elasticsearch-certutil http

## Elasticsearch HTTP Certificate Utility

The 'http' command guides you through the process of generating certificates
for use on the HTTP (Rest) interface for Elasticsearch.

This tool will ask you a number of questions in order to generate the right
set of files for your needs.

## Do you wish to generate a Certificate Signing Request (CSR)?

A CSR is used when you want your certificate to be created by an existing
Certificate Authority (CA) that you do not control (that is, you don't have
access to the keys for that CA).

If you are in a corporate environment with a central security team, then you
may have an existing Corporate CA that can generate your certificate for you.
Infrastructure within your organisation may already be configured to trust this
CA, so it may be easier for clients to connect to Elasticsearch if you use a
CSR and send that request to the team that controls your CA.

If you choose not to generate a CSR, this tool will generate a new certificate
for you. That certificate will be signed by a CA under your control. This is a
quick and easy way to secure your cluster with TLS, but you will need to
configure all your clients to trust that custom CA.

Generate a CSR? [y/N]n

## Do you have an existing Certificate Authority (CA) key-pair that you wish to
use to sign your certificate?

If you have an existing CA certificate and key, then you can use that CA to
sign your new http certificate. This allows you to use the same CA across
multiple Elasticsearch clusters which can make it easier to configure clients,
and may be easier for you to manage.

If you do not have an existing CA, one will be generated for you.

Use an existing CA? [y/N]y

## What is the path to your CA?

Please enter the full pathname to the Certificate Authority that you wish to
use for signing your new http certificate. This can be in PKCS#12 (.p12), JKS
(.jks) or PEM (.crt, .key, .pem) format.
CA Path: D:\ELK\elasticsearch-7.16\elasticsearch-7.16.1\elastic-stack-ca.p12
Reading a PKCS12 keystore requires a password.
It is possible for the keystore's password to be blank,
in which case you can simply press <ENTER> at the prompt
Password for elastic-stack-ca.p12:

## How long should your certificates be valid?

Every certificate has an expiry date. When the expiry date is reached clients
will stop trusting your certificate and TLS connections will fail.

Best practice suggests that you should either:
(a) set this to a short duration (90 - 120 days) and have automatic processes
to generate a new certificate before the old one expires, or
(b) set it to a longer duration (3 - 5 years) and then perform a manual update
a few months before it expires.

You may enter the validity period in years (e.g. 3Y), months (e.g. 18M), or days (e.g. 90D)

For how long should your certificate be valid? [5y] 20y

## Do you wish to generate one certificate per node?

If you have multiple nodes in your cluster, then you may choose to generate a
separate certificate for each of these nodes. Each certificate will have its
own private key, and will be issued for a specific hostname or IP address.

Alternatively, you may wish to generate a single certificate that is valid
across all the hostnames or addresses in your cluster.

If all of your nodes will be accessed through a single domain
(e.g. node01.es.example.com, node02.es.example.com, etc) then you may find it
simpler to generate one certificate with a wildcard hostname (*.es.example.com)
and use that across all of your nodes.

However, if you do not have a common domain name, and you expect to add
additional nodes to your cluster in the future, then you should generate a
certificate per node so that you can more easily generate new certificates when
you provision new nodes.

Generate a certificate per node? [y/N]n

## Which hostnames will be used to connect to your nodes?

These hostnames will be added as "DNS" names in the "Subject Alternative Name"
(SAN) field in your certificate.

You should list every hostname and variant that people will use to connect to
your cluster over http.
Do not list IP addresses here, you will be asked to enter them later.

If you wish to use a wildcard certificate (for example *.es.example.com) you
can enter that here.

Enter all the hostnames that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.


You did not enter any hostnames.
Clients are likely to encounter TLS hostname verification errors if they
connect to your cluster using a DNS name.

Is this correct [Y/n]y

## Which IP addresses will be used to connect to your nodes?

If your clients will ever connect to your nodes by numeric IP address, then you
can list these as valid IP "Subject Alternative Name" (SAN) fields in your
certificate.

If you do not have fixed IP addresses, or do not wish to support direct IP access
to your cluster, then you can just press <ENTER> to skip this step.

Enter all the IP addresses that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.


You did not enter any IP addresses.

Is this correct [Y/n]y

## Other certificate options

The generated certificate will have the following additional configuration
values. These values have been selected based on a combination of the
information you have provided above and secure defaults. You should not need to
change these values unless you have specific requirements.

Key Name: elasticsearch
Subject DN: CN=elasticsearch
Key Size: 2048

Do you wish to change any of these options? [y/N]n

## What password do you want for your private key(s)?

Your private key(s) will be stored in a PKCS#12 keystore file named "http.p12".
This type of keystore is always password protected, but it is possible to use a
blank password.

If you wish to use a blank password, simply press <enter> at the prompt below.
Provide a password for the "http.p12" file:  [<ENTER> for none]
Repeat password to confirm:

## Where should we save the generated files?

A number of files will be generated including your private key(s),
public certificate(s), and sample configuration options for Elastic Stack products.

These files will be included in a single zip archive.

What filename should be used for the output zip file? [D:\ELK\elasticsearch-7.16\elasticsearch-7.16.1\elasticsearch-ssl-http.zip]

Zip file written to D:\ELK\elasticsearch-7.16\elasticsearch-7.16.1\elasticsearch-ssl-http.zip

D:\ELK\elasticsearch-7.16\elasticsearch-7.16.1\bin>elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
Setting xpack.security.http.ssl.keystore.secure_password already exists. Overwrite? [y/N]y
Enter value for xpack.security.http.ssl.keystore.secure_password:

D:\ELK\elasticsearch-7.16\elasticsearch-7.16.1\bin>

Then I copied http.p12 to the config folder of all three nodes. Calling Elasticsearch at https://10.0.0.53:9200/_cluster/health returns the following:

{
  "cluster_name": "cluster_security",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 3,
  "number_of_data_nodes": 3,
  "active_primary_shards": 14,
  "active_shards": 28,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100
}
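
For reference, the same health check can be run from the command line (a sketch, assuming elasticsearch-ca.pem is in the working directory):

curl --cacert elasticsearch-ca.pem -u elastic "https://10.0.0.53:9200/_cluster/health?pretty"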

Then I copied elasticsearch-ca.pem to the Kibana config folder; kibana.yml is as follows:

# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["https://10.0.0.230:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "elastic"
elasticsearch.password: "***"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
elasticsearch.ssl.certificateAuthorities: C:\ELK\kibana-7.13.1-windows-x86_64\config\elasticsearch-ca.pem

Starting Kibana leads to the following error:

[error][savedobjects-service] Unable to retrieve version information from Elasticsearch nodes.

and the URL shows:

Kibana server is not ready yet

Any advice will be much appreciated.

Perhaps: wrap paths in single quotation marks.

Windows paths in particular sometimes contain spaces or characters, such as drive letters or triple dots, that may be misinterpreted by the YAML parser.

To avoid this problem, it's a good idea to wrap paths in single quotation marks, as shown below.
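
For example, the CA path from the kibana.yml above would become:

elasticsearch.ssl.certificateAuthorities: 'C:\ELK\kibana-7.13.1-windows-x86_64\config\elasticsearch-ca.pem'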

What do the rest of the Kibana logs look like? I suspect it is not connecting to Elasticsearch.

Many thanks for your reply.
I copied elasticsearch-ca.pem to the bin folder and used the following setting:
elasticsearch.ssl.certificateAuthorities: "elasticsearch-ca.pem"
but the Kibana log is still as follows:

  log   [19:58:42.346] [warning][config][plugins][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
  log   [19:58:42.347] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
  log   [19:58:42.410] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
  log   [19:58:42.418] [info][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, and is supported for Win32 OS. Automatically enabling Chromium sandbox.
  log   [19:58:42.419] [warning][encryptedSavedObjects][plugins] Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
  log   [19:58:42.524] [warning][actions][actions][plugins] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
  log   [19:58:42.535] [warning][alerting][alerting][plugins][plugins] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
  log   [19:58:42.613] [info][monitoring][monitoring][plugins] config sourced from: production cluster
  log   [19:58:42.840] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
  log   [19:58:44.222] [error][savedobjects-service] Unable to retrieve version information from Elasticsearch nodes.

It is noted that I am sending data from Metricbeat to ELK, and by checking the indices (https://10.0.0.53:9200/_cat/indices) I can see the index has been created. Notably, the same elasticsearch-ca.pem has been copied to the Metricbeat folder, and the config is as follows:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://10.0.0.53:9200","https://10.0.0.230:9200","https://10.0.0.118:9200"]
  #protocol: "https"
  indices:
    - index: "metric-%{+yyyy.MM}"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "***"
  ssl:
    certificate_authorities: ["elasticsearch-ca.pem"]
    verification_mode: "certificate"
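
A quick way to confirm this output configuration connects (a standard Beats subcommand, run from the Metricbeat directory) is:

metricbeat test output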

Either use the correct syntax or put it in the correct directory.

Copy the elasticsearch-ca.pem file to the Kibana configuration directory, as defined by the $KBN_PATH_CONF path.

Open kibana.yml and add the following line to specify the location of the security certificate for the HTTP layer.

elasticsearch.ssl.certificateAuthorities: $KBN_PATH_CONF/elasticsearch-ca.pem

Personally I think the full path with the single quotes is a better approach.

I activated Kibana debug logging and the result is as follows:

  log   [20:19:08.936] [error][savedobjects-service] Unable to retrieve version information from Elasticsearch nodes.
  log   [20:19:09.426] [debug][status] Recalculated overall status
  log   [20:19:10.429] [debug][data][elasticsearch][query] [ConnectionError]: Hostname/IP does not match certificate's altnames: IP: 10.0.0.118 is not in the cert's list:
  log   [20:19:11.455] [debug][ops][metrics] memory: 135.1MB uptime: 0:00:20 load: [0.00,0.00,0.00] delay: 5.650
  log   [20:19:12.895] [debug][data][elasticsearch][query] [ConnectionError]: Hostname/IP does not match certificate's altnames: IP: 10.0.0.53 is not in the cert's list:
  log   [20:19:15.427] [debug][data][elasticsearch][query] [ConnectionError]: Hostname/IP does not match certificate's altnames: IP: 10.0.0.230 is not in the cert's list:
  log   [20:19:16.462] [debug][ops][metrics] memory: 136.6MB uptime: 0:00:25 load: [0.00,0.00,0.00] delay: 5.578

But this issue is not seen in the Metricbeat case.

Also, the Elasticsearch log shows the following, where 10.0.0.17 is the IP of the server running Kibana:

[2021-12-19T19:36:44,645][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node-1] http client did not trust this server's certificate, closing connection Netty4HttpChannel{localAddress=/10.0.0.53:9200, remoteAddress=/10.0.0.17:53981}

Yes, good, it's finding the cert.

But this is an SSL certificate issue.

You are connecting to

elasticsearch.hosts: ["https://10.0.0.230:9200"]

and, as the error says, that IP is not part of the certificate. You will need to add those IPs when you create the cert if you want to use the CA without issue as configured, OR see below.

From above... it looks like you did not enter any IPs during the process:

Enter all the IP addresses that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.

You did not enter any IP addresses.
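
If you want to double-check which names the served certificate actually contains, one way (a sketch, assuming OpenSSL is available on some client machine) is:

openssl s_client -connect 10.0.0.230:9200 </dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"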

It works with Metricbeat because you set

verification_mode: "certificate"

certificate

Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.

You can do the same with Kibana; look at this setting:

elasticsearch.ssl.verificationMode:

Controls the verification of the server certificate that Kibana receives when making an outbound SSL/TLS connection to Elasticsearch. Valid values are "full", "certificate", and "none". Using "full" performs hostname verification, using "certificate" skips hostname verification, and using "none" skips verification entirely. Default: "full"
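
A minimal kibana.yml sketch combining the settings discussed in this thread (the CA path is the one from your config):

elasticsearch.hosts: ["https://10.0.0.230:9200"]
elasticsearch.ssl.certificateAuthorities: 'C:\ELK\kibana-7.13.1-windows-x86_64\config\elasticsearch-ca.pem'
elasticsearch.ssl.verificationMode: certificate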


Dear Stephen,
Many thanks for your reply. It works now after setting the following in kibana.yml:

elasticsearch.ssl.verificationMode: certificate

Actually, I did not set any IPs because the list of clients is not fixed, and new clients (Filebeat, Metricbeat, Kibana, Logstash, ...) may want to connect to Elasticsearch.
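
For reference, the stricter alternative (keeping the default verificationMode: full) would be to regenerate http.p12 and enter the node IPs at the IP prompt of elasticsearch-certutil http:

10.0.0.53
10.0.0.230
10.0.0.118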


Dear @stephenb
Can I ask you a question?
In the above case, I did not enter a specific hostname for the certificate, so it was issued to "elasticsearch" (CN=elasticsearch). I created elasticsearch-ssl-http.zip on one node without entering any hostnames, copied http.p12 to the config folders of the other Elasticsearch nodes, and copied elasticsearch-ca.pem to the folders of the tools that need to connect to Elasticsearch (Kibana, Logstash, ...). Now I want to issue a certificate to each specific host, creating a zip file for each Elasticsearch node issued to its hostname:


1. When asked if you want to generate a CSR, enter `n`.
2. When asked if you want to use an existing CA, enter `y`.
3. Enter the path to your CA. This is the absolute path to the `elastic-stack-ca.p12` file that you generated for your cluster.
4. Enter the password for your CA.
5. Enter an expiration value for your certificate. You can enter the validity period in years, months, or days. For example, enter `90D` for 90 days.
6. When asked if you want to generate one certificate per node, enter `y`. Each certificate will have its own private key, and will be issued for a specific hostname or IP address.
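
With per-node certificates, the tool then prompts for each node's name and its hostnames/IP addresses in turn; an illustrative set of answers for this cluster (the hostnames here are hypothetical, and the exact prompts may differ):

node-1: hostname node-1.example.local, IP 10.0.0.53
node-2: hostname node-2.example.local, IP 10.0.0.230
node-3: hostname node-3.example.local, IP 10.0.0.118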

The question is: I have three Elasticsearch nodes, so three elasticsearch-ssl-http.zip files will be created, each issued to its own hostname. After that I want to copy elasticsearch-ca.pem to the Kibana, Logstash, and Metricbeat folders. Is this file the same in all three zip files created on the Elasticsearch nodes?

Any advice will be much appreciated.

Regards
