Issue setting up a self-managed ELK stack with TLS/HTTPS

I’m trying to set up an ELK stack for SIEM with a standard install. I installed Elasticsearch and Kibana, and both worked fine over HTTP, but after I set up TLS using a self-signed certificate from our CA, I can’t access the web page anymore: I get ERR_CONNECTION_REFUSED. I followed the directions on elastic.co. What am I missing? Here are my .yml files.

elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================

#

# NOTE: Elasticsearch comes with reasonable defaults for most settings.

#       Before you set out to tweak and tune the configuration, make sure you

#       understand what you are trying to accomplish and the consequences.

#

# The primary way of configuring a node is via this file. This template lists

# the most important settings you may want to configure for a production cluster.

#

# Please consult the documentation for further information on configuration options:

# 


#

# ---------------------------------- Cluster -----------------------------------

#

# Use a descriptive name for your cluster:

#

#cluster.name: my-application

#

# ------------------------------------ Node ------------------------------------

#

# Use a descriptive name for the node:

#

#node.name: node-1

#

# Add custom attributes to the node:

#

#node.attr.rack: r1

#

# ----------------------------------- Paths ------------------------------------

#

# Path to directory where to store the data (separate multiple locations by comma):

#

path.data: /var/lib/elasticsearch

#

# Path to log files:

#

path.logs: /var/log/elasticsearch

#

# ----------------------------------- Memory -----------------------------------

#

# Lock the memory on startup:

#

#bootstrap.memory_lock: true

#

# Make sure that the heap size is set to about half the memory available

# on the system and that the owner of the process is allowed to use this

# limit.

#

# Elasticsearch performs poorly when the system is swapping the memory.

#

# ---------------------------------- Network -----------------------------------

#

# By default Elasticsearch is only accessible on localhost. Set a different

# address here to expose this node on the network:

#

network.host: siem.ncics.org

#

# By default Elasticsearch listens for HTTP traffic on the first free port it

# finds starting at 9200. Set a specific HTTP port here:

#

http.port: 9200

#

# For more information, consult the network module documentation.

#

# --------------------------------- Discovery ----------------------------------

#

# Pass an initial list of hosts to perform discovery when this node is started:

# The default list of hosts is ["127.0.0.1", "[::1]"]

#

#discovery.seed_hosts: ["host1", "host2"]

#

# Bootstrap the cluster using an initial set of master-eligible nodes:

#

#cluster.initial_master_nodes: ["node-1", "node-2"]

#

# For more information, consult the discovery and cluster formation module documentation.

#

# ---------------------------------- Various -----------------------------------

#

# Allow wildcard deletion of indices:

#

#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------

#

# The following settings, TLS certificates, and keys have been automatically      

# generated to configure Elasticsearch security features on 10-02-2026 19:06:40

#

# --------------------------------------------------------------------------------



# Enable security features

xpack.security.enabled: true



xpack.security.enrollment.enabled: true



# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents

xpack.security.http.ssl:

  enabled: true

  keystore.path: certs/http.p12



# Enable encryption and mutual authentication between cluster nodes

xpack.security.transport.ssl:

  enabled: true

  verification_mode: certificate

  keystore.path: certs/transport.p12

  truststore.path: certs/transport.p12

# Create a new cluster with the current node only

# Additional nodes can still join the cluster later

cluster.initial_master_nodes: ["siem.ncics.org"]



# Allow HTTP API connections from anywhere

# Connections are encrypted and require user authentication

http.host: 0.0.0.0



# Allow other nodes to join the cluster from anywhere

# Connections are encrypted and mutually authenticated

#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
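For reference, you can check which hostnames the auto-generated HTTP certificate in `http.p12` actually covers. This assumes the default Debian package paths shown above and that your Elasticsearch version supports `elasticsearch-keystore show`:

```shell
# The keystore password for http.p12 is stored in the Elasticsearch keystore:
/usr/share/elasticsearch/bin/elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password

# Then list the subject and SAN entries the HTTP certificate is valid for
# (replace PASSWORD with the value printed above):
openssl pkcs12 -in /etc/elasticsearch/certs/http.p12 -nokeys -passin pass:PASSWORD \
  | openssl x509 -noout -subject -ext subjectAltName
```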

kibana.yml:

# For more configuration options see the configuration guide for Kibana in

# 




# =================== System: Kibana Server ===================

# Kibana is served by a back end server. This setting specifies the port to use.

server.port: 5601



# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.

# The default is 'localhost', which usually means remote machines will not be able to connect.

# To allow connections from remote users, set this parameter to a non-loopback address.

server.host: siem.ncics.org



# Enables you to specify a path to mount Kibana at if you are running behind a proxy.

# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath

# from requests it receives, and to prevent a deprecation warning at startup.

# This setting cannot end in a slash.

#server.basePath: ""



# Specifies whether Kibana should rewrite requests that are prefixed with

# `server.basePath` or require that they are rewritten by your reverse proxy.

# Defaults to `false`.

#server.rewriteBasePath: false



# Specifies the public URL at which Kibana is available for end users. If

# `server.basePath` is configured this URL should end with the same basePath.

#server.publicBaseUrl: ""



# The maximum payload size in bytes for incoming server requests.

#server.maxPayload: 1048576



# The Kibana server's name. This is used for display purposes.

#server.name: "your-hostname"



# =================== System: Kibana Server (Optional) ===================

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.

# These settings enable SSL for outgoing requests from the Kibana server to the browser.

server.ssl.enabled: true

server.ssl.certificate: /etc/elasticsearch/certs/siem_ncics_org.pem

server.ssl.key: /etc/elasticsearch/certs/siem_ncics_org.key



# =================== System: Elasticsearch ===================

# The URLs of the Elasticsearch instances to use for all your queries.

#elasticsearch.hosts: https://siem.ncics.org:9200



# If your Elasticsearch is protected with basic authentication, these settings provide

# the username and password that the Kibana server uses to perform maintenance on the Kibana

# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which

# is proxied through the Kibana server.

#elasticsearch.username: "kibana_system"

#elasticsearch.password: "pass"



# Kibana can also authenticate to Elasticsearch via "service account tokens".

# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.

# Use this token instead of a username/password.

# elasticsearch.serviceAccountToken: "my_token"



# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of

# the elasticsearch.requestTimeout setting.

#elasticsearch.pingTimeout: 1500



# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value

# must be a positive integer.

#elasticsearch.requestTimeout: 30000



# The maximum number of sockets that can be used for communications with elasticsearch.

# Defaults to `800`.

#elasticsearch.maxSockets: 1024



# Specifies whether Kibana should use compression for communications with elasticsearch

# Defaults to `false`.

#elasticsearch.compression: false



# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side

# headers, set this value to [] (an empty list).

#elasticsearch.requestHeadersWhitelist: [ authorization ]



# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten

# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.

#elasticsearch.customHeaders: {}



# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.

#elasticsearch.shardTimeout: 30000



# =================== System: Elasticsearch (Optional) ===================

# These files are used to verify the identity of Kibana to Elasticsearch and are required when

# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.

elasticsearch.ssl.certificate: /etc/elasticsearch/certs/siem_ncics_org.pem

elasticsearch.ssl.key: /etc/elasticsearch/certs/siem_ncics_org.key



# Enables you to specify a path to the PEM file for the certificate

# authority for your Elasticsearch instance.

#elasticsearch.ssl.certificateAuthorities: /etc/elasticsearch/certs/siem_ncics_org.pem 



# To disregard the validity of SSL certificates, change this setting's value to 'none'.

#elasticsearch.ssl.verificationMode: full



# =================== System: Logging ===================

# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'

#logging.root.level: debug



# Enables you to specify a file where Kibana stores log output.

logging:

  appenders:

    file:

      type: file

      fileName: /var/log/kibana/kibana.log

      layout:

        type: json

  root:

    appenders:

      - default

      - file

#  policy:

#    type: size-limit

#    size: 256mb

#  strategy:

#    type: numeric

#    max: 10

#  layout:

#    type: json



# Logs queries sent to Elasticsearch.

#logging.loggers:

#  - name: elasticsearch.query

#    level: debug



# Logs http responses.

#logging.loggers:

#  - name: http.server.response

#    level: debug



# Logs system usage information.

#logging.loggers:

#  - name: metrics.ops

#    level: debug



# Enables debug logging on the browser (dev console)

#logging.browser.root:

#  level: debug



# =================== System: Other ===================

# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data

#path.data: data



# Specifies the path where Kibana creates the process ID file.

pid.file: /run/kibana/kibana.pid



# Set the interval in milliseconds to sample system and process performance

# metrics. Minimum is 100ms. Defaults to 5000ms.

#ops.interval: 5000



# Specifies locale to be used for all localizable strings, dates and number formats.

# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".

#i18n.locale: "en"



# =================== Frequently used (Optional)===================



# =================== Saved Objects: Migrations ===================

# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.



# The number of documents migrated at a time.

# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,

# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.

#migrations.batchSize: 1000



# The maximum payload size for indexing batches of upgraded saved objects.

# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.

# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`

# configuration option. Default: 100mb

#migrations.maxBatchSizeBytes: 100mb



# The number of times to retry temporary migration failures. Increase the setting

# if migrations fail frequently with a message such as `Unable to complete the [...] step after

# 15 attempts, terminating`. Defaults to 15

#migrations.retryAttempts: 15



# =================== Search Autocomplete ===================

# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.

# This value must be a whole number greater than zero. Defaults to 1000ms

#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000



# Maximum number of documents loaded by each shard to generate autocomplete suggestions.

# This value must be a whole number greater than zero. Defaults to 100_000

#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000





# This section was automatically generated during setup.

elasticsearch.hosts: [https://siem.ncics.org:9200]

elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3NzA3NTI2NTQxMzU6RWJMNXVFVmpSby1EQTFKNVZ2V2JtZw

elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1770752655547.crt]

xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: [https://siem.ncics.org:9200], ca_trusted_fingerprint: c4c11ff74cc53f9ed066752cf8da5ed4ee8371820116abcd12d6e90b7727f845}]
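Since the same cert/key pair is referenced in several places in this kibana.yml, a quick sanity check that the two files actually belong together is worthwhile. A sketch using the paths above:

```shell
# Compare the public key embedded in the certificate with the one derived
# from the private key; they must be identical for TLS to work.
cert_pub=$(openssl x509 -in /etc/elasticsearch/certs/siem_ncics_org.pem -noout -pubkey)
key_pub=$(openssl pkey -in /etc/elasticsearch/certs/siem_ncics_org.key -pubout)
if [ "$cert_pub" = "$key_pub" ]; then
  echo "cert and key match"
else
  echo "MISMATCH: Kibana cannot serve TLS with this pair" >&2
fi
```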


Hi @BenNCSU welcome to the community

I suspect your cert does not have the SAN siem.ncics.org, and thus SSL certificate validation is failing.

You can check that by running the following and share the result

curl -k -v -u elastic https://siem.ncics.org:9200

Please share the whole command and its output and we can take a look.
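If curl isn't conclusive, you can also read the SANs straight off the live listener with openssl, using the same host and port:

```shell
# Fetch the certificate Elasticsearch serves on 9200 and print its
# subject and subjectAltName entries (requires OpenSSL 1.1.1+ for -ext)
openssl s_client -connect siem.ncics.org:9200 -servername siem.ncics.org </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
```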

Thanks. Here is the output:

Enter host password for user 'elastic':

* Host siem.ncics.org:9200 was resolved.

* IPv6: 2610:28:a000:0:10:0:1:225

* IPv4: 10.0.1.225

*   Trying [2610:28:a000:0:10:0:1:225]:9200...

* Connected to siem.ncics.org (2610:28:a000:0:10:0:1:225) port 9200

* ALPN: curl offers h2,http/1.1

* TLSv1.3 (OUT), TLS handshake, Client hello (1):

* TLSv1.3 (IN), TLS handshake, Server hello (2):

* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):

* TLSv1.3 (IN), TLS handshake, Certificate (11):

* TLSv1.3 (IN), TLS handshake, CERT verify (15):

* TLSv1.3 (IN), TLS handshake, Finished (20):

* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):

* TLSv1.3 (OUT), TLS handshake, Finished (20):

* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / X25519 / RSASSA-PSS

* ALPN: server did not agree on a protocol. Uses default.

* Server certificate:

*  subject: CN=siem.ncics.org

*  start date: Feb 10 19:06:45 2026 GMT

*  expire date: Feb 10 19:06:45 2028 GMT

*  issuer: CN=Elasticsearch security auto-configuration HTTP CA

*  SSL certificate verify result: self-signed certificate in certificate chain (19), continuing anyway.

*   Certificate level 0: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption

*   Certificate level 1: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption

* using HTTP/1.x

* Server auth using Basic with user 'elastic'

> GET / HTTP/1.1

> Host: siem.ncics.org:9200

> Authorization: Basic ZWxhc3RpYzp5b18qbU1VM1NzSnAqQzRtNUY3dw==

> User-Agent: curl/8.5.0

> Accept: */*

>

* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):

< HTTP/1.1 200 OK

< X-elastic-product: Elasticsearch

< content-type: application/json

< content-length: 540

<

{

"name" : "siem.ncics.org",

"cluster_name" : "elasticsearch",

"cluster_uuid" : "41vPsw_bRfq3yMVHJYVdWg",

"version" : {

"number" : "8.19.11",

"build_flavor" : "default",

"build_type" : "deb",

"build_hash" : "c5253e1bcb0268a5dafed9dee18e16fd3144d7d6",

"build_date" : "2026-01-28T22:06:09.337243873Z",

"build_snapshot" : false,

"lucene_version" : "9.12.2",

"minimum_wire_compatibility_version" : "7.17.0",

"minimum_index_compatibility_version" : "7.0.0"

},

"tagline" : "You Know, for Search"

}

* Connection #0 to host siem.ncics.org left intact

Is this when you attempt to access Kibana?

If so, then you are likely to get useful information from the Kibana logs. Depending on how you installed Kibana they will be in one of these locations:

  • $KIBANA_HOME/logs
  • /var/log/kibana
  • docker logs

From the looks of it, my guess is you installed Elasticsearch and Kibana from OS packages (.rpm or .deb). If so, then Kibana and Elasticsearch are running under different uids, and your `server.ssl.key` setting (pointing into /etc/elasticsearch/certs) is probably failing because the Kibana operating system account doesn't have permission to read from /etc/elasticsearch/.
If that's the case, it should be clear in the logs.
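One quick way to test that theory directly, using the kibana service account and the key path from the config earlier in the thread:

```shell
# Try to read Kibana's configured TLS key as the kibana service user.
# A failure here (EACCES) confirms the permissions theory; note that every
# directory on the path also needs execute permission for the kibana user.
sudo -u kibana test -r /etc/elasticsearch/certs/siem_ncics_org.key \
  && echo "kibana can read the key" \
  || echo "kibana CANNOT read the key (check file group and directory execute bits)"
```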


Correct. I followed the install doc from Elastic.

Elasticsearch

Kibana

I don’t see anything like that in the Kibana log, just INFO entries.

{"ecs":{"version":"8.11.0"},"@timestamp":"2026-02-25T09:26:33.651-05:00","message":"Kibana is starting","log":{"level":"INFO","logger":"root"},"process":{"pid":803454,"uptime":3.396100136}}

{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2026-02-25T09:26:33.689-05:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":803454,"uptime":3.414451798},"trace":{"id":"c5c7dcd632ba3b9906b7ac9a2d1a94dd"},"transaction":{"id":"6454611cd0771570"}}


root@siem:~# id elasticsearch

uid=110(elasticsearch) gid=110(elasticsearch) groups=110(elasticsearch)

root@siem:~# id kibana

uid=111(kibana) gid=111(kibana) groups=111(kibana)

Is the Kibana process running? You may need to check systemd logs instead.

No, you are right. I found this in the syslog:

2026-02-26T08:22:45.878696-05:00 siem kibana[859789]:  FATAL  Error: EACCES: permission denied, open '/etc/elasticsearch/certs/siem_ncics_org.key'

Ok, I figured that issue out. The group on the cert files wasn’t set to elasticsearch. I fixed that by setting the elasticsearch group on my cert files and adding the kibana user to the elasticsearch group. I got Kibana to start correctly at first, but then it errors out:

Nothing new in the Kibana log. When I go to https://siem.ncics.org I get ERR_CONNECTION_REFUSED, but when I go to https://siem.ncics.org:9200, I get this:

{
  "name": "siem.ncics.org",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "41vPsw_bRfq3yMVHJYVdWg",
  "version": {
    "number": "8.19.11",
    "build_flavor": "default",
    "build_type": "deb",
    "build_hash": "c5253e1bcb0268a5dafed9dee18e16fd3144d7d6",
    "build_date": "2026-01-28T22:06:09.337243873Z",
    "build_snapshot": false,
    "lucene_version": "9.12.2",
    "minimum_wire_compatibility_version": "7.17.0",
    "minimum_index_compatibility_version": "7.0.0"
  },
  "tagline": "You Know, for Search"
}

But now, when I check the status of Kibana, it says this:

root@siem:/etc/elasticsearch/certs# systemctl status kibana

× kibana.service - Kibana

     Loaded: loaded (/usr/lib/systemd/system/kibana.service; enabled; preset: enabled)

     Active: failed (Result: exit-code) since Thu 2026-02-26 08:34:17 EST; 5min ago

   Duration: 12.913s

       Docs: https://www.elastic.co

    Process: 860506 ExecStart=/usr/share/kibana/bin/kibana (code=exited, status=1/FAILURE)

   Main PID: 860506 (code=exited, status=1/FAILURE)

        CPU: 14.326s



Feb 26 08:34:17 siem.ncics.org systemd[1]: kibana.service: Scheduled restart job, restart counter is at 3.

Feb 26 08:34:17 siem.ncics.org systemd[1]: kibana.service: Start request repeated too quickly.

Feb 26 08:34:17 siem.ncics.org systemd[1]: kibana.service: Failed with result 'exit-code'.

Feb 26 08:34:17 siem.ncics.org systemd[1]: Failed to start kibana.service - Kibana.

Feb 26 08:34:17 siem.ncics.org systemd[1]: kibana.service: Consumed 14.326s CPU time.


This is in the syslog:

2026-02-26T08:59:23.623011-05:00 siem systemd[1]: Starting elasticsearch.service - Elasticsearch...
2026-02-26T08:59:55.155746-05:00 siem systemd[1]: Started elasticsearch.service - Elasticsearch.
2026-02-26T08:59:55.184511-05:00 siem systemd[1]: Started kibana.service - Kibana.
2026-02-26T08:59:55.438920-05:00 siem kibana[862157]: Kibana is currently running with legacy OpenSSL providers enabled! For details and instructions on how to disable see https://www.elastic.co/guide/en/kibana/8.19/production.html#openssl-legacy-provider
2026-02-26T08:59:58.285766-05:00 siem kibana[862157]: {"log.level":"info","@timestamp":"2026-02-26T13:59:58.284Z","log.logger":"elastic-apm-node","ecs.version":"8.10.0","agentVersion":"4.13.0","env":{"pid":862157,"proctitle":"/usr/share/kibana/bin/../node/glibc-217/bin/node","os":"linux 6.8.0-100-generic","arch":"x64","host":"siem.ncics.org","timezone":"UTC-0500","runtime":"Node.js v22.22.0"},"config":{"active":{"source":"start","value":true},"breakdownMetrics":{"source":"start","value":false},"captureBody":{"source":"start","value":"off","commonName":"capture_body"},"captureHeaders":{"source":"start","value":false},"centralConfig":{"source":"start","value":false},"contextPropagationOnly":{"source":"start","value":true},"environment":{"source":"start","value":"production"},"globalLabels":{"source":"start","value":[["kibana_uuid","263eb9df-53b6-4f3d-8a4f-ec9fc6b7fec1"],["git_rev","c14722b56e3d34d5203bd311e91f9ec49227b044"]],"sourceValue":{"kibana_uuid":"263eb9df-53b6-4f3d-8a4f-ec9fc6b7fec1","git_rev":"c14722b56e3d34d5203bd311e91f9ec49227b044"}},"logLevel":{"source":"default","value":"info","commonName":"log_level"},"metricsInterval":{"source":"start","value":120,"sourceValue":"120s"},"serverUrl":{"source":"start","value":"https://kibana-cloud-apm.apm.us-east-1.aws.found.io/","commonName":"server_url"},"transactionSampleRate":{"source":"start","value":0.1,"commonName":"transaction_sample_rate"},"captureSpanStackTraces":{"source":"start","sourceValue":false},"secretToken":{"source":"start","value":"[REDACTED]","commonName":"secret_token"},"serviceName":{"source":"start","value":"kibana","commonName":"service_name"},"serviceVersion":{"source":"start","value":"8.19.11","commonName":"service_version"}},"activationMethod":"require","message":"Elastic APM Node.js Agent v4.13.0"}
2026-02-26T08:59:58.704009-05:00 siem kibana[862157]: Native global console methods have been overridden in production environment.
2026-02-26T09:00:00.260139-05:00 siem systemd[1]: Starting sysstat-collect.service - system activity accounting tool...
2026-02-26T09:00:00.272573-05:00 siem systemd[1]: sysstat-collect.service: Deactivated successfully.
2026-02-26T09:00:00.272748-05:00 siem systemd[1]: Finished sysstat-collect.service - system activity accounting tool.
2026-02-26T09:00:00.991779-05:00 siem kibana[862157]: [2026-02-26T09:00:00.971-05:00][INFO ][root] Kibana is starting
2026-02-26T09:00:01.009174-05:00 siem kibana[862157]: [2026-02-26T09:00:01.008-05:00][INFO ][node] Kibana process configured with roles: [background_tasks, ui]
2026-02-26T09:00:10.682654-05:00 siem kibana[862157]:  FATAL  Error: error:1E08010C:DECODER routines::unsupported
2026-02-26T09:00:10.714460-05:00 siem systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
2026-02-26T09:00:10.714995-05:00 siem systemd[1]: kibana.service: Failed with result 'exit-code'.
2026-02-26T09:00:10.716251-05:00 siem systemd[1]: kibana.service: Consumed 14.824s CPU time.
2026-02-26T09:00:13.753115-05:00 siem systemd[1]: kibana.service: Scheduled restart job, restart counter is at 1.


This is the cert directory:

root@siem:/etc/elasticsearch/certs# ls -al  /etc/elasticsearch/certs

total 44

drwxr-x--- 2 root elasticsearch   140 Feb 18 11:57 .

drwxr-s--- 4 root elasticsearch  4096 Feb 18 13:13 ..

-rw-rw---- 1 root elasticsearch  1939 Feb 10 14:06 http_ca.crt

-rw-rw---- 1 root elasticsearch 10109 Feb 10 14:06 http.p12

-rw-rw---- 1 root elasticsearch  1257 Feb 11 10:34 siem_ncics_org.csr

-rw-rw---- 1 root elasticsearch  7223 Feb 11 10:34 siem_ncics_org.key

-rw-rw---- 1 root elasticsearch  1627 Feb 11 10:34 siem_ncics_org.pem

-rw-rw---- 1 root elasticsearch  5838 Feb 10 14:06 transport.p12

2026-02-26T14:02:34.689196-05:00 siem (ic-agent)[10544]: elastic-agent.service: Changing to the requested working directory failed: No such file or directory
2026-02-26T14:02:34.691445-05:00 siem systemd[1]: elastic-agent.service: Main process exited, code=exited, status=200/CHDIR
2026-02-26T14:02:34.691615-05:00 siem systemd[1]: elastic-agent.service: Failed with result 'exit-code'.

Those logs look like they have to do with Elastic Agent, not Kibana, to me.
Perhaps you can run this command after you start Kibana:

journalctl -u kibana.service

And provide all the lines up until it stops... It seems like perhaps you're picking out some log lines and we could be missing something.


Restarting Kibana:

Mar 03 21:13:21 siem.ncics.org systemd[1]: Stopping kibana.service - Kibana...
░░ Subject: A stop job for unit kibana.service has begun execution
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A stop job for unit kibana.service has begun execution.
░░ 
░░ The job identifier is 859075.
Mar 03 21:13:21 siem.ncics.org kibana[10705]: [2026-03-03T21:13:21.400-05:00][INFO ][root] SIGTERM received - initiating shutdown
Mar 03 21:13:21 siem.ncics.org kibana[10705]: [2026-03-03T21:13:21.402-05:00][INFO ][root] Kibana is shutting down
Mar 03 21:13:21 siem.ncics.org kibana[10705]: [2026-03-03T21:13:21.477-05:00][INFO ][plugins-system.standard] Stopping all plugins.
Mar 03 21:13:21 siem.ncics.org kibana[10705]: [2026-03-03T21:13:21.522-05:00][INFO ][plugins.securitySolution.endpoint:complete-external-response-actions] Un-registering task definition [endpoint:complete-externa>
Mar 03 21:13:21 siem.ncics.org kibana[10705]: [2026-03-03T21:13:21.541-05:00][INFO ][plugins.monitoring.monitoring.kibana-monitoring] Monitoring stats collection is stopped
Mar 03 21:13:21 siem.ncics.org kibana[10705]: [2026-03-03T21:13:21.572-05:00][INFO ][plugins.taskManager] Stopping the task poller
Mar 03 21:13:22 siem.ncics.org kibana[10705]: [2026-03-03T21:13:22.639-05:00][ERROR][plugins.taskManager] Deleting current node has failed. error: error attempting to authenticate request: security_exception
Mar 03 21:13:22 siem.ncics.org kibana[10705]:         Caused by:
Mar 03 21:13:22 siem.ncics.org kibana[10705]:                 cluster_block_exception: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
Mar 03 21:13:22 siem.ncics.org kibana[10705]:         Root causes:
Mar 03 21:13:22 siem.ncics.org kibana[10705]:                 cluster_block_exception: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
Mar 03 21:13:22 siem.ncics.org kibana[10705]: [2026-03-03T21:13:22.702-05:00][WARN ][plugins.taskManager] Poll interval configuration changing from 3000 to 61000 after seeing 1 "too many request" and/or "execute >
Mar 03 21:13:22 siem.ncics.org kibana[10705]: [2026-03-03T21:13:22.703-05:00][WARN ][plugins.taskManager] Poll interval configuration changing from 3000 to 61000 after seeing 1 "too many request" and/or "execute >
Mar 03 21:13:22 siem.ncics.org kibana[10705]: [2026-03-03T21:13:22.727-05:00][ERROR][plugins.eventLog] error writing bulk events: "security_exception
Mar 03 21:13:22 siem.ncics.org kibana[10705]:         Caused by:
Mar 03 21:13:22 siem.ncics.org kibana[10705]:                 cluster_block_exception: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
Mar 03 21:13:22 siem.ncics.org kibana[10705]:         Root causes:
Mar 03 21:13:22 siem.ncics.org kibana[10705]:                 cluster_block_exception: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"; docs: [{"create":{}},{"@timestamp":"2026-03-04T02:13>
Mar 03 21:13:22 siem.ncics.org kibana[10705]: [2026-03-03T21:13:22.736-05:00][INFO ][plugins-system.standard] All plugins stopped.
Mar 03 21:13:23 siem.ncics.org kibana[10705]: [2026-03-03T21:13:23.087-05:00][WARN ][plugins.monitoring.monitoring.kibana-monitoring] ResponseError: security_exception
Mar 03 21:13:23 siem.ncics.org kibana[10705]:         Caused by:
Mar 03 21:13:23 siem.ncics.org kibana[10705]:                 cluster_block_exception: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
Mar 03 21:13:23 siem.ncics.org kibana[10705]:         Root causes:
Mar 03 21:13:23 siem.ncics.org kibana[10705]:                 cluster_block_exception: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
Mar 03 21:13:23 siem.ncics.org kibana[10705]:     at KibanaTransport._request (/usr/share/kibana/node_modules/@elastic/transport/lib/Transport.js:529:27)
Mar 03 21:13:23 siem.ncics.org kibana[10705]:     at processTicksAndRejections (node:internal/process/task_queues:105:5)
Mar 03 21:13:23 siem.ncics.org kibana[10705]:     at /usr/share/kibana/node_modules/@elastic/transport/lib/Transport.js:627:32
Mar 03 21:13:23 siem.ncics.org kibana[10705]:     at KibanaTransport.request (/usr/share/kibana/node_modules/@elastic/transport/lib/Transport.js:623:20)
Mar 03 21:13:23 siem.ncics.org kibana[10705]:     at KibanaTransport.request (/usr/share/kibana/node_modules/@kbn/core-elasticsearch-client-server-internal/src/create_transport.js:60:16)
Mar 03 21:13:23 siem.ncics.org kibana[10705]:     at Monitoring.bulk (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/api/api/monitoring.js:59:16)
Mar 03 21:13:23 siem.ncics.org kibana[10705]:     at sendBulkPayload (/usr/share/kibana/node_modules/@kbn/monitoring-plugin/server/kibana_monitoring/lib/send_bulk_payload.js:19:10)
Mar 03 21:13:23 siem.ncics.org kibana[10705]:     at BulkUploader._onPayload (/usr/share/kibana/node_modules/@kbn/monitoring-plugin/server/kibana_monitoring/bulk_uploader.js:151:12)
Mar 03 21:13:23 siem.ncics.org kibana[10705]:     at BulkUploader._fetchAndUpload (/usr/share/kibana/node_modules/@kbn/monitoring-plugin/server/kibana_monitoring/bulk_uploader.js:140:9)
Mar 03 21:13:23 siem.ncics.org kibana[10705]: [2026-03-03T21:13:23.090-05:00][WARN ][plugins.monitoring.monitoring.kibana-monitoring] Unable to bulk upload the stats payload to the local cluster
Mar 03 21:13:23 siem.ncics.org kibana[10705]: [2026-03-03T21:13:23.114-05:00][ERROR][plugins.taskManager] Failed to load list of active kibana nodes: error attempting to authenticate request: security_exception
Mar 03 21:13:23 siem.ncics.org kibana[10705]:         Caused by:
Mar 03 21:13:23 siem.ncics.org kibana[10705]:                 cluster_block_exception: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
Mar 03 21:13:23 siem.ncics.org kibana[10705]:         Root causes:
Mar 03 21:13:23 siem.ncics.org kibana[10705]:                 cluster_block_exception: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
Mar 03 21:13:23 siem.ncics.org kibana[10705]: [2026-03-03T21:13:23.134-05:00][ERROR][plugins.taskManager] Failed to poll for work: There are no living connections
Mar 03 21:13:23 siem.ncics.org kibana[10705]: [2026-03-03T21:13:23.144-05:00][INFO ][plugins.taskManager] Task poller finished running its last cycle
Mar 03 21:13:23 siem.ncics.org systemd[1]: kibana.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit kibana.service has successfully entered the 'dead' state.
Mar 03 21:13:23 siem.ncics.org systemd[1]: Stopped kibana.service - Kibana.
░░ Subject: A stop job for unit kibana.service has finished
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A stop job for unit kibana.service has finished.
░░ 
░░ The job identifier is 859075 and the job result is done.
Mar 03 21:13:23 siem.ncics.org systemd[1]: kibana.service: Consumed 1h 6min 57.403s CPU time.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit kibana.service completed and consumed the indicated resources.
Mar 03 21:13:23 siem.ncics.org systemd[1]: Started kibana.service - Kibana.
░░ Subject: A start job for unit kibana.service has finished successfully
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit kibana.service has finished successfully.
░░ 
░░ The job identifier is 859075.
Mar 03 21:13:24 siem.ncics.org kibana[52266]: Kibana is currently running with legacy OpenSSL providers enabled! For details and instructions on how to disable see https://www.elastic.co/guide/en/kibana/8.19/prod>
Mar 03 21:13:31 siem.ncics.org kibana[52266]: {"log.level":"info","@timestamp":"2026-03-04T02:13:31.419Z","log.logger":"elastic-apm-node","ecs.version":"8.10.0","agentVersion":"4.13.0","env":{"pid":52266,"proctit>
Mar 03 21:13:31 siem.ncics.org kibana[52266]: Native global console methods have been overridden in production environment.
Mar 03 21:13:36 siem.ncics.org kibana[52266]: [2026-03-03T21:13:36.499-05:00][INFO ][root] Kibana is starting
Mar 03 21:13:36 siem.ncics.org kibana[52266]: [2026-03-03T21:13:36.539-05:00][INFO ][node] Kibana process configured with roles: [background_tasks, ui]
Mar 03 21:13:55 siem.ncics.org kibana[52266]:  FATAL  Error: error:1E08010C:DECODER routines::unsupported
Mar 03 21:13:56 siem.ncics.org systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ An ExecStart= process belonging to unit kibana.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 1.
Mar 03 21:13:56 siem.ncics.org systemd[1]: kibana.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit kibana.service has entered the 'failed' state with result 'exit-code'.
Mar 03 21:13:56 siem.ncics.org systemd[1]: kibana.service: Consumed 16.052s CPU time.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit kibana.service completed and consumed the indicated resources.

So I fed your output into our internal LLM and this is what I got.

Short answer

There are two different things happening in this log, but the actual fatal error that kills Kibana is this one:

FATAL  Error: error:1E08010C:DECODER routines::unsupported

This error means OpenSSL 3 (bundled with Kibana's Node.js runtime) refused to decode a TLS key or certificate that Kibana loads at startup.
In practice, this almost always means an incompatible SSL key, certificate, or keystore (often in a legacy format, or encrypted with an unsupported algorithm).
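You can run the same decoder by hand with the openssl CLI. The sketch below generates a throwaway key purely for illustration; on the real host, point the same two commands at whatever file `server.ssl.key` references in kibana.yml (the filenames here are made up):

```shell
# Generate a sample RSA key just so the commands below have something to chew on.
openssl genrsa -out sample.key 2048

# If OpenSSL 3 can decode the key, this exits 0; Kibana's loader uses the same decoder.
openssl pkey -in sample.key -noout && echo "key decodes OK"

# Re-encode the key as unencrypted PKCS#8 PEM, the format Node.js/OpenSSL 3
# accepts most reliably. The output always starts with "BEGIN PRIVATE KEY".
openssl pkcs8 -topk8 -nocrypt -in sample.key -out sample-pkcs8.key
head -1 sample-pkcs8.key   # -----BEGIN PRIVATE KEY-----
```

If `openssl pkey` fails on your real key with a decoder/unsupported error, the `pkcs8 -topk8 -nocrypt` conversion (run against a backup copy) is usually the fix.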


What’s going on, step by step

1. Kibana was deliberately stopped

SIGTERM received - initiating shutdown

This is a clean shutdown, most commonly caused by:

  • systemctl restart kibana
  • Package upgrade
  • Configuration management (Ansible, Puppet, etc.)
  • Manual stop/start

:white_check_mark: This part is not an error.


2. Elasticsearch is unavailable during shutdown

During shutdown, Kibana logs a lot of:

cluster_block_exception: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized]
security_exception
There are no living connections

This means:

  • Elasticsearch was down or not fully started
  • The cluster state was not yet recovered
  • Kibana could not authenticate or write monitoring/task data

:warning: These are secondary symptoms, not the root cause.


3. Kibana restarts … then crashes immediately

After systemd starts Kibana again:

Kibana is currently running with legacy OpenSSL providers enabled!

Then:

FATAL  Error: error:1E08010C:DECODER routines::unsupported

:cross_mark: This is the real failure


Root cause (most important)

:white_check_mark: Invalid or unsupported TLS material under OpenSSL 3

Kibana (Node.js with OpenSSL 3) is trying to load one of the files referenced by these settings, and OpenSSL cannot decode it:

  • server.ssl.key
  • server.ssl.certificate
  • elasticsearch.ssl.certificateAuthorities

The usual culprits are:

  • A PKCS#1 private key instead of PKCS#8
  • A key encrypted with a legacy or unsupported cipher
  • A corrupted or non-PEM file

This matches the OpenSSL decoder error exactly:

DECODER routines::unsupported
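Each of those files can be checked individually. The sketch below creates a throwaway self-signed pair so it runs anywhere; substitute the real paths your kibana.yml uses for `server.ssl.certificate` and `server.ssl.key` (the names and CN here are invented):

```shell
# Throwaway pair for demonstration only.
openssl req -x509 -newkey rsa:2048 -nodes -keyout kibana.key \
    -out kibana.crt -days 1 -subj "/CN=siem.example.org"

# Each file must decode cleanly; a failure here is the same decoder Kibana trips on.
openssl x509 -in kibana.crt -noout -subject -enddate
openssl pkey -in kibana.key -noout

# The certificate and key must be a matching pair: these two digests must be identical.
openssl x509 -in kibana.crt -noout -pubkey | openssl sha256
openssl pkey -in kibana.key -pubout | openssl sha256
```

Run the same checks against the CA file in `elasticsearch.ssl.certificateAuthorities` too; any one of the three failing to decode is enough to kill Kibana at startup.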

Why this started happening

Recent Kibana versions:

  • Use OpenSSL 3
  • Disable many legacy crypto algorithms by default
  • Are much stricter about key formats

Elastic explicitly documents this behavior change:

  • Legacy OpenSSL algorithms are no longer enabled by default
  • --openssl-legacy-provider is now required for old keys (temporary workaround)
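For reference: on deb/rpm installs that flag lives in /etc/kibana/node.options, and the startup banner earlier in this log ("legacy OpenSSL providers enabled!") suggests it is already active on this host — so the legacy provider alone won't rescue a key the decoder rejects outright. The relevant line looks like this (path may differ for archive installs):

```
## /etc/kibana/node.options
--openssl-legacy-provider
```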