Kibana not connecting after HTTPS setup

I have enabled HTTPS on Elasticsearch and now Kibana cannot connect.

      log   [15:28:25.571] [error][elasticsearch] Request error, retrying
    GET https://ukee-exp004:9200/_xpack?accept_enterprise=true => connect ECONNREFUSED 10.130.40.43:9200
      log   [15:28:26.603] [warning][elasticsearch] Unable to revive connection: https://ukee-exp004:9200/
      log   [15:28:26.605] [warning][elasticsearch] No living connections
      log   [15:28:26.609] [warning][licensing][plugins] License information could not be obtained from Elasticsearch due to Error: No Living connections error

I am running on a Windows 64-bit server and followed the instructions at Configure security for the Elastic Stack | Elasticsearch Guide [7.x] | Elastic.

In brief, I auto-generated the elastic passwords and enabled HTTPS.
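
For reference, the password step was roughly the following (the 7.x tool for this is elasticsearch-setup-passwords; the generated values are omitted here):

    C:\Program Files\Elastic\Elasticsearch\7.12.1>.\bin\elasticsearch-setup-passwords.bat auto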

The steps for creating the HTTP certificate are below:

    C:\Program Files\Elastic\Elasticsearch\7.12.1>.\bin\elasticsearch-certutil.bat http

    ## Elasticsearch HTTP Certificate Utility

    The 'http' command guides you through the process of generating certificates
    for use on the HTTP (Rest) interface for Elasticsearch.

    This tool will ask you a number of questions in order to generate the right
    set of files for your needs.

    ## Do you wish to generate a Certificate Signing Request (CSR)?

    A CSR is used when you want your certificate to be created by an existing
    Certificate Authority (CA) that you do not control (that is, you don't have
    access to the keys for that CA).

    If you are in a corporate environment with a central security team, then you
    may have an existing Corporate CA that can generate your certificate for you.
    Infrastructure within your organisation may already be configured to trust this
    CA, so it may be easier for clients to connect to Elasticsearch if you use a
    CSR and send that request to the team that controls your CA.

    If you choose not to generate a CSR, this tool will generate a new certificate
    for you. That certificate will be signed by a CA under your control. This is a
    quick and easy way to secure your cluster with TLS, but you will need to
    configure all your clients to trust that custom CA.

    Generate a CSR? [y/N]n

    ## Do you have an existing Certificate Authority (CA) key-pair that you wish to use to sign your certificate?

    If you have an existing CA certificate and key, then you can use that CA to
    sign your new http certificate. This allows you to use the same CA across
    multiple Elasticsearch clusters which can make it easier to configure clients,
    and may be easier for you to manage.

    If you do not have an existing CA, one will be generated for you.

    Use an existing CA? [y/N]y

    ## What is the path to your CA?

    Please enter the full pathname to the Certificate Authority that you wish to
    use for signing your new http certificate. This can be in PKCS#12 (.p12), JKS
    (.jks) or PEM (.crt, .key, .pem) format.
    CA Path: C:\ProgramData\Elastic\Elasticsearch\config\elastic-stack-ca.p12
    Reading a PKCS12 keystore requires a password.
    It is possible for the keystore's password to be blank,
    in which case you can simply press <ENTER> at the prompt
    Password for elastic-stack-ca.p12:

    ## How long should your certificates be valid?

    Every certificate has an expiry date. When the expiry date is reached clients
    will stop trusting your certificate and TLS connections will fail.

    Best practice suggests that you should either:
    (a) set this to a short duration (90 - 120 days) and have automatic processes
    to generate a new certificate before the old one expires, or
    (b) set it to a longer duration (3 - 5 years) and then perform a manual update
    a few months before it expires.

    You may enter the validity period in years (e.g. 3Y), months (e.g. 18M), or days (e.g. 90D)

    For how long should your certificate be valid? [5y] 5y

    ## Do you wish to generate one certificate per node?

    If you have multiple nodes in your cluster, then you may choose to generate a
    separate certificate for each of these nodes. Each certificate will have its
    own private key, and will be issued for a specific hostname or IP address.

    Alternatively, you may wish to generate a single certificate that is valid
    across all the hostnames or addresses in your cluster.

    If all of your nodes will be accessed through a single domain
    (e.g. node01.es.example.com, node02.es.example.com, etc) then you may find it
    simpler to generate one certificate with a wildcard hostname (*.es.example.com)
    and use that across all of your nodes.

    However, if you do not have a common domain name, and you expect to add
    additional nodes to your cluster in the future, then you should generate a
    certificate per node so that you can more easily generate new certificates when
    you provision new nodes.

    Generate a certificate per node? [y/N]y

    ## What is the name of node #1?

    This name will be used as part of the certificate file name, and as a
    descriptive name within the certificate.

    You can use any descriptive name that you like, but we recommend using the name
    of the Elasticsearch node.

    node #1 name: UKEE-EXP004

    ## Which hostnames will be used to connect to UKEE-EXP004?

    These hostnames will be added as "DNS" names in the "Subject Alternative Name"
    (SAN) field in your certificate.

    You should list every hostname and variant that people will use to connect to
    your cluster over http.
    Do not list IP addresses here, you will be asked to enter them later.

    If you wish to use a wildcard certificate (for example *.es.example.com) you
    can enter that here.

    Enter all the hostnames that you need, one per line.
    When you are done, press <ENTER> once more to move on to the next step.

    10.130.40.43

    You entered the following hostnames.

     - 10.130.40.43

    Is this correct [Y/n]y

    ## Which IP addresses will be used to connect to UKEE-EXP004?

    If your clients will ever connect to your nodes by numeric IP address, then you
    can list these as valid IP "Subject Alternative Name" (SAN) fields in your
    certificate.

    If you do not have fixed IP addresses, or not wish to support direct IP access
    to your cluster then you can just press <ENTER> to skip this step.

    Enter all the IP addresses that you need, one per line.
    When you are done, press <ENTER> once more to move on to the next step.

    10.130.40.43

    You entered the following IP addresses.

     - 10.130.40.43

    Is this correct [Y/n]y

    ## Other certificate options

    The generated certificate will have the following additional configuration
    values. These values have been selected based on a combination of the
    information you have provided above and secure defaults. You should not need to
    change these values unless you have specific requirements.

    Key Name: UKEE-EXP004
    Subject DN: CN=UKEE-EXP004
    Key Size: 2048

    Do you wish to change any of these options? [y/N]n
    Generate additional certificates? [Y/n]n

    ## What password do you want for your private key(s)?

    Your private key(s) will be stored in a PKCS#12 keystore file named "http.p12".
    This type of keystore is always password protected, but it is possible to use a
    blank password.

    If you wish to use a blank password, simply press <enter> at the prompt below.
    Provide a password for the "http.p12" file:  [<ENTER> for none]
    Repeat password to confirm:

    ## Where should we save the generated files?

    A number of files will be generated including your private key(s),
    public certificate(s), and sample configuration options for Elastic Stack products.

    These files will be included in a single zip archive.

    What filename should be used for the output zip file? [C:\Program Files\Elastic\Elasticsearch\7.12.1\elasticsearch-ssl-http.zip]

    Zip file written to C:\Program Files\Elastic\Elasticsearch\7.12.1\elasticsearch-ssl-http.zip

    C:\Program Files\Elastic\Elasticsearch\7.12.1>.\bin\elasticsearch-keystore.bat add xpack.security.http.ssl.keystore.secure_password
    Enter value for xpack.security.http.ssl.keystore.secure_password:
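
The generated zip contains an elasticsearch folder (with http.p12) and a kibana folder (with elasticsearch-ca.pem). Roughly how I deployed them in PowerShell (the extraction path is just a scratch location):

    # C:\temp\es-ssl is only a scratch extraction path
    Expand-Archive "C:\Program Files\Elastic\Elasticsearch\7.12.1\elasticsearch-ssl-http.zip" -DestinationPath C:\temp\es-ssl
    Copy-Item C:\temp\es-ssl\elasticsearch\http.p12 "C:\ProgramData\Elastic\Elasticsearch\config\"
    Copy-Item C:\temp\es-ssl\kibana\elasticsearch-ca.pem "D:\kibana\kibana-7.12.1-windows-x86_64\config\"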

kibana.yml settings are:

    # The URLs of the Elasticsearch instances to use for all your queries.
    elasticsearch.hosts: ["https://UKEE-EXP004:9200"]

    # Kibana uses an index in Elasticsearch to store saved searches, visualizations and
    # dashboards. Kibana creates a new index if the index doesn't already exist.
    #kibana.index: ".kibana"

    # The default application to load.
    #kibana.defaultAppId: "home"

    # If your Elasticsearch is protected with basic authentication, these settings provide
    # the username and password that the Kibana server uses to perform maintenance on the Kibana
    # index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
    # is proxied through the Kibana server.
    elasticsearch.username: "kibana_system"
    elasticsearch.password: "FpxxxxxxxxzZ"

    # Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
    # These settings enable SSL for outgoing requests from the Kibana server to the browser.
    #server.ssl.enabled: false
    #server.ssl.certificate: /path/to/your/server.crt
    #server.ssl.key: /path/to/your/server.key

    # Optional settings that provide the paths to the PEM-format SSL certificate and key files.
    # These files are used to verify the identity of Kibana to Elasticsearch and are required when
    # xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
    #elasticsearch.ssl.certificate: /path/to/your/client.crt
    #elasticsearch.ssl.key: /path/to/your/client.key

    # Optional setting that enables you to specify a path to the PEM file for the certificate
    # authority for your Elasticsearch instance.
    elasticsearch.ssl.certificateAuthorities: [ "D:/kibana/kibana-7.12.1-windows-x86_64/config/elasticsearch-ca.pem" ]
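
As a sanity check that Elasticsearch is reachable over HTTPS at all, something like this from a plain command prompt on the Kibana host should return the cluster info (assumes curl.exe is available, which it is on recent Windows Server builds):

    curl.exe --cacert "D:\kibana\kibana-7.12.1-windows-x86_64\config\elasticsearch-ca.pem" -u elastic https://UKEE-EXP004:9200/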

elasticsearch.yml:

    bootstrap.memory_lock: false
    cluster.name: elasticsearch2
    http.port: 9200
    node.data: true
    node.ingest: true
    node.master: true
    node.max_local_storage_nodes: 1
    node.name: UKEE-EXP004
    path.data: D:\ProgramData\Elastic\Elasticsearch\data
    path.logs: D:\ProgramData\Elastic\Elasticsearch\logs
    transport.tcp.port: 9300
    xpack.license.self_generated.type: basic
    xpack.security.enabled: true
    discovery.type: single-node
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.client_authentication: required
    xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.keystore.path: http.p12
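
After restarting the Elasticsearch service with these settings, the startup log records which address the HTTP layer actually bound to. A quick way to check (assuming the log file is named after cluster.name):

    REM log file name assumed to be <cluster.name>.log
    findstr /c:"publish_address" "D:\ProgramData\Elastic\Elasticsearch\logs\elasticsearch2.log"

If that only shows 127.0.0.1:9200, the node is not listening on 10.130.40.43 at all.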

Please help!

Have you also set xpack.security.enabled to true in kibana.yml? The license information error is a common symptom of this.

Hi,

Thanks for the reply.

No, this was not in the kibana.yml file. It is also not mentioned in the documentation for kibana.yml, only for elasticsearch.yml (unless I missed it, but I have just gone back and searched).

Regardless, it hasn't fixed the issue. Thanks for trying.

Did it change anything in the error message, or is it still the same?

Exactly the same. I have also noticed that if I restore both of those .yml files from the first-install backups, I still can't connect. I could in the past, before attempting to set up HTTP security.

      log   [20:05:07.463] [error][elasticsearch] Request error, retrying
    GET https://10.130.40.43:9200/_xpack?accept_enterprise=true => connect ECONNREFUSED 10.130.40.43:9200
      log   [20:05:08.476] [warning][elasticsearch] Unable to revive connection: https://10.130.40.43:9200/
      log   [20:05:08.478] [warning][elasticsearch] No living connections
      log   [20:05:08.483] [warning][licensing][plugins] License information could not be obtained from Elasticsearch due to Error: No Living connections error
      log   [20:05:08.489] [warning][monitoring][monitoring][plugins] X-Pack Monitoring Cluster Alerts will not be available: No Living connections

kibana.yml:

    elasticsearch.hosts: ["https://10.130.40.43:9200"]

    # Kibana uses an index in Elasticsearch to store saved searches, visualizations and
    # dashboards. Kibana creates a new index if the index doesn't already exist.
    #kibana.index: ".kibana"

    # The default application to load.
    #kibana.defaultAppId: "home"

    # If your Elasticsearch is protected with basic authentication, these settings provide
    # the username and password that the Kibana server uses to perform maintenance on the Kibana
    # index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
    # is proxied through the Kibana server.
    elasticsearch.username: "kibana_system"
    elasticsearch.password: "FptvtABgnFrKCmvHRuzZ"

    # Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
    # These settings enable SSL for outgoing requests from the Kibana server to the browser.
    #server.ssl.enabled: false
    #server.ssl.certificate: /path/to/your/server.crt
    #server.ssl.key: /path/to/your/server.key

    # Optional settings that provide the paths to the PEM-format SSL certificate and key files.
    # These files are used to verify the identity of Kibana to Elasticsearch and are required when
    # xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
    #elasticsearch.ssl.certificate: /path/to/your/client.crt
    #elasticsearch.ssl.key: /path/to/your/client.key
    xpack.security.enabled: true

    # Optional setting that enables you to specify a path to the PEM file for the certificate
    # authority for your Elasticsearch instance.
    elasticsearch.ssl.certificateAuthorities: [ "config/elasticsearch-ca.pem" ]

    # To disregard the validity of SSL certificates, change this setting's value to 'none'.
    elasticsearch.ssl.verificationMode: none
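
Note that elasticsearch.ssl.verificationMode: none only relaxes certificate checking; it cannot help with ECONNREFUSED, which happens before TLS even starts. Once the connection works, a stricter sketch would be (assuming elasticsearch-ca.pem sits in the Kibana config directory):

    # sketch only: restore verification once connectivity is fixed
    elasticsearch.ssl.verificationMode: full
    elasticsearch.ssl.certificateAuthorities: [ "config/elasticsearch-ca.pem" ]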

The backed-up elasticsearch.yml doesn't work now; it did before attempting the HTTPS setup:

    bootstrap.memory_lock: false
    cluster.name: elasticsearch
    http.port: 9200
    node.data: true
    node.ingest: true
    node.master: true
    node.max_local_storage_nodes: 1
    node.name: UKEE-EXP004
    path.data: D:\ProgramData\Elastic\Elasticsearch\data
    path.logs: D:\ProgramData\Elastic\Elasticsearch\logs
    transport.tcp.port: 9300
    xpack.license.self_generated.type: basic
    xpack.security.enabled: false

With xpack.security disabled and using the browser on the local machine, Elasticsearch responds at http://localhost:9200, but using the hostname or IP address, even on the same machine, nothing! :frowning: I have no idea...
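
A quick check of what port 9200 is actually bound to on the server (if only 127.0.0.1:9200 shows up, Elasticsearch is listening on loopback alone, which would explain localhost working while the hostname and IP are refused):

    netstat -ano | findstr :9200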

I seem to be making significant progress by adding this to elasticsearch.yml:

    network.host: 10.130.40.43
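
Presumably this works because, with no network.host set, Elasticsearch binds to loopback only, so connections to 10.130.40.43:9200 are refused, which matches the ECONNREFUSED in the Kibana log. An equivalent sketch using the documented special values:

    # alternative special values for network.host:
    #network.host: 0.0.0.0    # all interfaces
    network.host: _site_      # any site-local address

Since discovery.type: single-node is already set, the discovery bootstrap check that binding to a non-loopback address normally triggers should still pass.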
