For improved security, I want the login process to use a client certificate generated from our own root certificate. This also ensures that no (probably weak) password needs to be entered when Kibana is loaded.
Kibana is running with SSL enabled and the connection uses https.
I am running a local test instance of Kibana, Elasticsearch and X-Pack before I deploy it on our server.
Right now I can log in with username and password, but I can't enable client certificate authentication.
On the local test machine SSL (https) is disabled; do I need to enable it for the client certificate authentication to work?
Edit: We are switching to Let's Encrypt Certificates
You asked in your original post:

"On the local test machine SSL (https) is disabled; do I need to enable it for the client certificate authentication to work?"

Yes, TLS/SSL is required, as explicitly mentioned in the documentation. If a PKI realm were configured with TLS disabled, you would get:
ERROR: [1] bootstrap checks failed
[1]: a PKI realm is enabled but cannot be used as neither HTTP or Transport have SSL and client authentication enabled
Also, I am not entirely sure in which context you mention Let's Encrypt certificates, but please note that even if Let's Encrypt offered personal certificates, it would not be prudent to allow anyone with a certificate signed by Let's Encrypt's CA to authenticate to your stack.
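For reference, a minimal sketch of the elasticsearch.yml settings that satisfy this bootstrap check, assuming the flat X-Pack 6.x setting names; all file paths here are placeholders, not your actual files:

# Enable TLS on the HTTP layer and require client certificates (placeholder paths)
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /path/to/node.key
xpack.security.http.ssl.certificate: /path/to/node.crt
xpack.security.http.ssl.client_authentication: required

# PKI realm that trusts certificates signed by your own CA (placeholder path)
xpack.security.authc.realms.pki1.type: pki
xpack.security.authc.realms.pki1.certificate_authorities: [ "/path/to/ca.crt" ]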
I'll try it out then.
I am using Let's Encrypt for the SSL/TLS and a custom CA for my client certificates. The previous CA wasn't working at all, so I might have more luck now. I'll try it later today.
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
xpack.ssl.key: ssl/dashweb.eu/privkey1.pem # Let's Encrypt
xpack.ssl.certificate: ssl/dashweb.eu/fullchain1.pem # Let's Encrypt
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.client_authentication: "required"
#xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.client_authentication: "required"
xpack.security.audit.enabled: true
xpack:
  security:
    authc:
      realms:
        pki1:
          type: pki
          certificate_authorities: "trusted" # PEM file that includes the accepted certificate
          enabled: true
#xpack.ssl.certificate_authorities: [
#  "ssl/dashweb.eu/chain1.pem",
#  "ssl/dashweb.eu/chain2.pem"]
When I open Kibana, it still asks for a username and password.
Do I need to configure something in order for the client certificate to work?
I apologize, but it turns out that my last couple of posts might have been misleading. You cannot use X-Pack PKI authentication to authenticate end users to Kibana. Elasticsearch is currently the only product that supports PKI as an end-user authentication method.
So, in summary, what is currently supported (see the sketches below):
- Users can authenticate to Elasticsearch directly with a client certificate via their browser
- Kibana can authenticate to Elasticsearch using a client certificate, instead of a username/password combination, in kibana.yml
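To illustrate both cases, a rough sketch; the hosts and file paths are placeholders, and the setting names assume the 6.x documentation. Authenticating to Elasticsearch directly with a client certificate, for example with curl:

curl --cert client.crt --key client.key --cacert ca.crt "https://localhost:9200/_xpack/security/_authenticate"

And the kibana.yml side, where Kibana itself presents a client certificate to Elasticsearch instead of elasticsearch.username/elasticsearch.password:

# Kibana's connection to Elasticsearch over TLS with a client certificate (placeholder paths)
elasticsearch.url: "https://localhost:9200"
elasticsearch.ssl.certificate: /path/to/kibana-client.crt
elasticsearch.ssl.key: /path/to/kibana-client.key
elasticsearch.ssl.certificateAuthorities: [ "/path/to/ca.crt" ]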