Change HTTP SSL security without private key of CA

We just installed the ELK stack v8.5.1 on a RHEL Linux server. Elasticsearch is using the auto-generated certs, and I can generate the enrollment token for my Kibana to connect. However, we want to use our corporate internal certificates for HTTP SSL (I am not working on the transport SSL). These SSL certificates have a SAN listing the server's hostnames, but not its IP addresses. They are all signed by an intermediate CA (used by our department), which is in turn signed by the corporate root CA (used by the entire company). Most importantly, we do not have access to the private key of either the intermediate CA or the root CA, as they are held by the cyber security team.

So I have 4 things (items 1-4):

  1. root CA cert, no private key
  2. intermediate CA cert, no private key, signed by item 1 above
  3. Elasticsearch server SSL cert, signed by item 2 above
  4. private key of the Elasticsearch server SSL cert (not password protected)

The goal is to get Kibana connected to Elasticsearch, but I am struggling to get these certificates working.

What I have tried:

  • Replaced the http.p12 keystore setting with the individual settings http.ssl.key=item4, http.ssl.certificate=item3, and http.ssl.certificate_authorities=item2. But when I tried to generate a Kibana enrollment token, it complained that the Elasticsearch node HTTP layer SSL configuration is not configured with a keystore.
  • Kept the individual settings but configured Kibana manually. Generated a service token with bin/elasticsearch-service-tokens create elastic/kibana my-token, then put that token in the elasticsearch.serviceAccountToken setting in kibana.yml. In the same file, updated elasticsearch.ssl.certificateAuthorities to point to item 2. The Kibana log then complained: Unable to retrieve version information from Elasticsearch nodes. unable to get issuer certificate.
  • Created another keystore file to replace http.p12 and updated the SSL keystore secure password accordingly. The new keystore contains items 2, 3, and 4, built with the command openssl pkcs12 -export -in item3 -inkey item4 -certfile item2 -out new_keystore_file.p12. But when generating the Kibana enrollment token, it complained: Elasticsearch node HTTP layer SSL configuration Keystore doesn't contain any PrivateKey entries where the associated certificate is a CA certificate. Yet when I examine the p12 file content using openssl pkcs12 -in example.p12 -info, it clearly displays the cert and the private key.

I am out of ideas here. I know the official documentation asks us to use the certutil tool to generate the SSL certificates, but that requires the CA cert and its private key, which will not happen for security reasons. Could anyone shed some light here? Thanks in advance.

Your 1st and 3rd attempts are not currently supported, but the 2nd one should have worked. For troubleshooting, please share:

  1. elasticsearch.yml
  2. kibana.yml
  3. Elasticsearch server logs from when the error happens.

You can also manually verify that SSL is working with Elasticsearch before configuring Kibana. For example, try cURLing Elasticsearch with something like

curl --cacert ITEM_2 https://ES_ADDRESS:9200

If SSL works, you should get a JSON authentication-error response.
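If it instead fails with unable to get issuer certificate, the trust file cannot complete the chain up to a self-signed root. This standalone sketch reproduces both the failure and the fix with throwaway certificates generated on the spot; every file name here is a placeholder, not your real corporate files:

```shell
# Build a 3-tier throwaway chain (root -> intermediate -> server) and
# verify the server cert the same way a TLS client would.
set -e

# Throwaway root CA (stands in for item 1)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout root.key -out root.cer -subj "/CN=Demo Root CA"

# Throwaway intermediate CA signed by the root (item 2)
openssl req -newkey rsa:2048 -nodes -keyout intermediate.key \
  -out intermediate.csr -subj "/CN=Demo Intermediate CA"
printf 'basicConstraints=critical,CA:TRUE\n' > ca.ext
openssl x509 -req -in intermediate.csr -CA root.cer -CAkey root.key \
  -CAcreateserial -days 1 -extfile ca.ext -out intermediate.cer

# Server cert signed by the intermediate (items 3 and 4)
openssl req -newkey rsa:2048 -nodes -keyout server.key \
  -out server.csr -subj "/CN=nortinuels02.mycompany.local"
openssl x509 -req -in server.csr -CA intermediate.cer \
  -CAkey intermediate.key -CAcreateserial -days 1 -out server.cer

# With only the intermediate as trust anchor, OpenSSL cannot reach a
# self-signed root and fails with "unable to get issuer certificate":
openssl verify -CAfile intermediate.cer server.cer || true

# With a bundle of intermediate + root, verification succeeds:
cat intermediate.cer root.cer > ca-bundle.pem
openssl verify -CAfile ca-bundle.pem server.cer
```

The same bundle can then be passed to curl via --cacert, or used anywhere a CA file is expected.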

Hi, thanks for your reply.

Please find the files below. elasticsearch.log produced nothing when the problem happened, but the Kibana log does show Unable to retrieve version information from Elasticsearch nodes. unable to get issuer certificate.

I tried your curl command on both elasticsearch and kibana servers, and got the same result: {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}},"status":401}

I also tried the 2nd attempt from my original post, but with the Elasticsearch-generated keystore file http.p12, and the service account token works... So now I am wondering whether there are specific requirements for the certs we can use?

elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#network.host: nortinuels02
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 18-04-2023 02:12:59
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
#  keystore.path: certs/http.p12

  key: /etc/elasticsearch/certs/npd-wemelk-elasticsearch.key
  certificate: /etc/elasticsearch/certs/npd-wemelk-elasticsearch.cer
  certificate_authorities: ["/etc/elasticsearch/certs/intermediate.cer"]

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["nortinuels02.mycompany.local"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

kibana.yml

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000


# This section was automatically generated during setup.
elasticsearch.hosts: ['https://10.20.106.161:9200']
#elasticsearch.hosts: ['https://nortinuels02:9200']
#elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2ODE4NTk5ODYzMjg6NjJPNFRYdTRUOGFMTGdhSFFLSHpkUQ
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL25vcnRpbnVraWIwMS10b2tlbjp1YXVqSjhMV1JTeVVYOHB1bFNtTDBn
#elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1681859987263.crt]
elasticsearch.ssl.certificateAuthorities: [/etc/kibana/Certificates/intermediate.cer]
#xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://10.20.106.161:9200'], ca_trusted_fingerprint: 04c69db3694322a227eb0015bd1163d35d930cc83b9fddfae94cfee8b5e992c0}]
#xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://nortinuels02:9200'], ca_trusted_fingerprint: 04c69db3694322a227eb0015bd1163d35d930cc83b9fddfae94cfee8b5e992c0}]

I also tried

  • disabled http.ssl
  • generated a new service token for the elastic/kibana principal
  • hit the ES server via HTTP (not HTTPS) with the following command
curl -H "Authorization: Bearer MY_SERVICE_TOKEN" http://ES_IP:9200/_security/_authenticate

and got this error:

"reason":"unable to authenticate with provided credentials and anonymous access is not allowed for this request","additional_unsuccessful_credentials":"oauth2 token: invalid token"

How to generate a valid service token?

How did you generate the service token? If it was generated using the CLI, it is local to the node; that is, you can only authenticate with it on that specific ES node. If you send it to another ES node, you will get the 401 error.

This means SSL is working properly, since the error comes from the application layer. The error is expected because the curl command provided no credentials. It also means the --cacert you specified for the curl command is correct, and you should use it when configuring Kibana's elasticsearch.ssl.certificateAuthorities.
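If Kibana nevertheless keeps reporting unable to get issuer certificate, one thing worth trying, since Node.js (which Kibana runs on) builds its own trust chain, is listing the root CA alongside the intermediate. This is a guess based on the error message, and both paths below are placeholders:

```yaml
# kibana.yml -- paths are assumptions; point them at wherever the PEM files live
elasticsearch.ssl.certificateAuthorities:
  - /etc/kibana/Certificates/intermediate.cer   # item 2
  - /etc/kibana/Certificates/root.cer           # item 1, the corporate root CA
```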

No, the service account has nothing to do with which certificate you use for HTTP SSL.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.