How to configure Kibana with client certificate authentication

For improved security, I want the login process to use a client certificate generated from our own root certificate. This also ensures that when Kibana loads, no (probably weak) password needs to be entered.
Kibana is running with SSL enabled and the connection is using https.
I am running a local test instance of Kibana, ElasticSearch and X-Pack before I deploy it on our server.
Right now I can log in with a username and password, but I can't enable client certificate authentication.
On the local test machine SSL (https) is disabled, do I need to enable it for the client certificate authentication to work?

Edit: We are switching to Let's Encrypt Certificates

Hi @timkoers,

Have you gone through the relevant part of the documentation? Is there a specific issue or difficulty you are dealing with?

Yes I've done that.
Locally, I've done that, and I still get the user/password login.

Hi, I just noticed the

On the local test machine SSL (https) is disabled, do I need to enable it for the client certificate authentication to work?

in your original post.

Yes, TLS/SSL is required, as mentioned explicitly in the documentation. If a PKI realm were configured with TLS disabled, you would get a

ERROR: [1] bootstrap checks failed
[1]: a PKI realm is enabled but cannot be used as neither HTTP or Transport have SSL and client authentication enabled
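For reference, a minimal sketch of what the docs describe, assuming X-Pack 6.x setting names (all file paths here are placeholders, not from the original post): TLS must be enabled on the HTTP layer with client authentication, and a PKI realm declared:

```yaml
# elasticsearch.yml -- hypothetical paths, X-Pack 6.x setting names
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /path/to/server.key
xpack.security.http.ssl.certificate: /path/to/server.crt
xpack.security.http.ssl.client_authentication: "required"

# PKI realm that trusts certificates signed by the custom CA
xpack.security.authc.realms.pki1:
  type: pki
  order: 0
  certificate_authorities: ["/path/to/client-ca.pem"]
```

With this in place the bootstrap check above passes, because the HTTP layer now has both SSL and client authentication enabled.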

Also, I am not entirely sure in which context you mention Let's Encrypt certificates, but please note that:

  • Let's Encrypt only offers Domain Validation certificates, so those can't be used for client authentication
  • Even if Let's Encrypt offered personal certificates, it would not be prudent to allow anyone with a certificate signed by Let's Encrypt's CAs to authenticate to your stack

I'll try it out then.
I am using Let's Encrypt for SSL/TLS and a custom CA for my client certificates. The previous CA wasn't working at all, so I might have better luck this time. I'll try it later today.
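The split described here could look roughly like this in elasticsearch.yml, assuming hypothetical paths (the Let's Encrypt files identify the server, while a separate custom CA bundle is what validates client certificates):

```yaml
# Server identity from Let's Encrypt (hypothetical live/ paths)
xpack.ssl.key: /etc/letsencrypt/live/example.com/privkey.pem
xpack.ssl.certificate: /etc/letsencrypt/live/example.com/fullchain.pem
# Custom CA that signed the client certificates (hypothetical path)
xpack.ssl.certificate_authorities: ["/etc/elasticsearch/ssl/client-ca.pem"]
```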

This is my elasticsearch.yml file

# ======================== Elasticsearch Configuration =========================
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what you are trying to accomplish and the consequences.
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
# Please consult the documentation for further information on configuration options:
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
#cluster.name: my-application
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
#node.name: node-1
# Add custom attributes to the node:
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
#path.data: /path/to/data
# Path to log files:
#path.logs: /path/to/logs
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
#bootstrap.memory_lock: true
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
# Elasticsearch performs poorly when the system is swapping the memory.
# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
#network.host: 192.168.0.1
# Set a custom port for HTTP:
http.port: 9200
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#discovery.zen.minimum_master_nodes: 3
# For more information, consult the zen discovery module documentation.
# ---------------------------------- Gateway -----------------------------------
# Block initial recovery after a full cluster restart until N nodes are started:
#gateway.recover_after_nodes: 3
# For more information, consult the gateway module documentation.
# ---------------------------------- Various -----------------------------------
# Require explicit names when deleting indices:
#action.destructive_requires_name: true

xpack.ssl.key: ssl/ #Lets encrypt
xpack.ssl.certificate: ssl/ #Lets encrypt
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.client_authentication: "required"
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.client_authentication: "required"
xpack.security.enabled: true

xpack.security.authc.realms.pki1:
  type: pki
  certificate_authorities: "trusted" #PEM file that includes the accepted certificate
  enabled: true

#xpack.ssl.certificate_authorities: [

When opening my Kibana, it still asks for a username and password.
Do I need to configure something in order for the client certificate to work?

Hi @timkoers

I apologize, but it turns out that my last couple of posts may have been misleading. You cannot use X-Pack PKI authentication to authenticate end users to Kibana. Elasticsearch is currently the only product that supports PKI as an end-user authentication method.

So in summary, what is currently supported:

  • Users can authenticate to Elasticsearch directly with a client certificate via their browser
  • Kibana can authenticate to Elasticsearch using a client certificate, instead of a username/password combination, in kibana.yml
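The second bullet, as a sketch: in kibana.yml, the elasticsearch.ssl.* settings let Kibana present a client certificate to Elasticsearch instead of elasticsearch.username/elasticsearch.password (paths below are placeholders):

```yaml
# kibana.yml -- hypothetical paths
elasticsearch.url: "https://localhost:9200"
# Client certificate and key Kibana presents to Elasticsearch
elasticsearch.ssl.certificate: /path/to/kibana-client.crt
elasticsearch.ssl.key: /path/to/kibana-client.key
# CA used to verify the Elasticsearch server certificate
elasticsearch.ssl.certificateAuthorities: ["/path/to/ca.pem"]
```

Note this only covers the Kibana-to-Elasticsearch connection; end users still authenticate to Kibana itself through one of the supported realms.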

I got it!


Might be a nice addition to the documentation :yum:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.