Applying SSL to Elasticsearch and Kibana: Connection refused on Kibana

Yes, obviously everything is sanitized.

I started over just in case, but I get the same result. I'll post everything again:

curl --cacert /var/lib/kibana/ca_1713886821490.crt -v -u elastic:pass https://server:9200

*   Trying 172.20.5.100:9200...
* Connected to server (172.20.5.100) port 9200 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /var/lib/kibana/ca_1713886821490.crt
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Unknown (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS header, Unknown (21):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: self-signed certificate in certificate chain
* Closing connection 0
curl: (60) SSL certificate problem: self-signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
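The failure above is curl refusing the chain because the root it ends in is not in the file passed to --cacert. The same condition can be reproduced offline with openssl on throwaway certificates (everything below is generated on the spot; none of it refers to the real cluster's files):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A throwaway CA and a server certificate signed by it:
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=server"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt -days 1

# Verification succeeds against the issuing CA:
openssl verify -CAfile ca.crt server.crt   # prints "server.crt: OK"

# Against an unrelated CA, the chain still builds up to the self-signed
# demo root, which is not trusted -- the same family of error curl reports
# as "self-signed certificate in certificate chain":
openssl req -x509 -newkey rsa:2048 -nodes -keyout other.key -out other.crt \
  -subj "/CN=other-ca" -days 1
openssl verify -CAfile other.crt -untrusted ca.crt server.crt || true
```

So a useful check here is whether the CA file given to --cacert really is the issuer of the certificate the server presents (the `issuer: CN=Elastic Certificate Tool Autogenerated CA` line in the -k output below shows what the server actually sends).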

curl -k --cacert /var/lib/kibana/ca_1713886821490.crt -v -u elastic:pass https://server:9200

*   Trying 172.20.5.100:9200...
* Connected to server (172.20.5.100) port 9200 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /var/lib/kibana/ca_1713886821490.crt
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Unknown (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Unknown (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=server
*  start date: May  6 16:03:14 2024 GMT
*  expire date: May  6 16:03:14 2074 GMT
*  issuer: CN=Elastic Certificate Tool Autogenerated CA
*  SSL certificate verify result: self-signed certificate in certificate chain (19), continuing anyway.
* Server auth using Basic with user 'elastic'
* TLSv1.2 (OUT), TLS header, Unknown (23):
> GET / HTTP/1.1
> Host: server:9200
> Authorization: Basic <redacted>
> User-Agent: curl/7.76.1
> Accept: */*
>
* TLSv1.2 (IN), TLS header, Unknown (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Unknown (23):
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< X-elastic-product: Elasticsearch
< content-type: application/json
< content-length: 540
<
{
  "name" : "server",
  "cluster_name" : "cluster",
  "cluster_uuid" : "H2A9Q-BGTmuHlZBMYT9VFQ",
  "version" : {
    "number" : "8.13.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "16cc90cd2d08a3147ce02b07e50894bc060a4cbf",
    "build_date" : "2024-04-05T14:45:26.420424304Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
* Connection #0 to host server left intact

/etc/elasticsearch/elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: server
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elasticsearch/data
#
# Path to log files:
#
path.logs: /data/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
transport.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
# http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 23-04-2024 15:20:35
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
#xpack.security.http.ssl:
#  enabled: true
#  keystore.path: certs/http.p12
  

#xpack.security.http.ssl.keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
#xpack.security.transport.ssl:
#  enabled: true
#  verification_mode: certificate
#  keystore.path: certs/transport.p12
#  truststore.path: certs/transport.p12
  
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
#xpack.security.transport.ssl.keystore.path: certs/transport.p12
#xpack.security.transport.ssl.truststore.path: certs/transport.p12
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12



xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
#xpack.security.transport.ssl.key: certs/server.key
#xpack.security.transport.ssl.certificate: certs/server.crt
#xpack.security.transport.ssl.certificate_authorities: certs/company-SRVDC1-CA.crt
 
 
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["server"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
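The keystore paths in the config above (certs/http.p12, elastic-certificates.p12) are PKCS#12 bundles, and it is worth seeing with your own eyes which certificates such a bundle actually holds. A sketch on a throwaway bundle (demo key, cert, and password only; not the real keystores):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a demo self-signed cert and pack it into a PKCS#12 bundle:
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -subj "/CN=demo" -days 1
openssl pkcs12 -export -inkey demo.key -in demo.crt -out demo.p12 \
  -passout pass:changeme

# List the certificate entries and print subject/issuer of the first one:
openssl pkcs12 -in demo.p12 -passin pass:changeme -nokeys -clcerts \
  | openssl x509 -noout -subject -issuer
```

Run against the real http.p12 (with its real password), the subject and issuer printed here should line up with what curl reported from the server.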

/etc/kibana/kibana.yml

### >>>>>>> BACKUP START: Kibana interactive setup (2024-04-24T09:01:34.743Z)

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
server.ssl.enabled: true
server.ssl.certificate: /etc/elasticsearch/certs/server.crt
server.ssl.key: /etc/elasticsearch/certs/server.key
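server.ssl.certificate and server.ssl.key must be a matching pair or Kibana's own HTTPS listener will fail. A quick way to confirm a pair matches is to compare the public key derived from each file (demo pair generated here; substitute the real paths from the config above):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Demo key pair standing in for /etc/elasticsearch/certs/server.{crt,key}:
openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key -out server.crt \
  -subj "/CN=server" -days 1

# The public key extracted from the cert and from the private key must hash
# identically if the two files belong together:
c=$(openssl x509 -in server.crt -noout -pubkey | openssl sha256)
k=$(openssl pkey -in server.key -pubout | openssl sha256)
[ "$c" = "$k" ] && echo "certificate and key match"
```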

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["https://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana_system"
elasticsearch.password: "kibana_pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
#logging:
#  appenders:
#    file:
#      type: file
#      fileName: /data/kibana/logs/kibana.log
#      layout:
#        type: json
#  root:
#    appenders:
#      - default
#      - file
#  policy:
#    type: size-limit
#    size: 256mb
#  strategy:
#    type: numeric
#    max: 10
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# Enables debug logging on the browser (dev console)
#logging.browser.root:
#  level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: /data/kibana/data

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000


# This section was automatically generated during setup.
#elasticsearch.hosts: ['https://server:9200']
#elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3MTM4ODY4MjAxOTg6aUtDdzZETmdUWDJ5S1pDN2t2bDNtUQ
#elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1713886821490.crt]
#xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://server:9200'], ca_trusted_fingerprint: a49e593719cb3be3567593ade13f2330efcdc5f88753e30cac7864e16a7a19e6}]


### >>>>>>> BACKUP END: Kibana interactive setup (2024-04-24T09:01:34.743Z)

# This section was automatically generated during setup.
server.port: 5601
server.host: 0.0.0.0
logging.appenders.file.type: file
logging.appenders.file.fileName: /data/kibana/logs/kibana.log
logging.appenders.file.layout.type: json
logging.root.appenders: [default, file]
path.data: /data/kibana/data
pid.file: /run/kibana/kibana.pid
elasticsearch.hosts: ['https://server:9200']
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3MTM5NDkyOTM0Nzc6VldPeXhnUzBSMEcyYWFTc3R2VFRZQQ
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://server:9200'], ca_trusted_fingerprint: a49e593719cb3be3567593ade13f2330efcdc5f88753e30cac7864e16a7a19e6}]

elasticsearch.ssl.certificateAuthorities: $KBN_PATH_CONF/elasticsearch-ca.pem
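The ca_trusted_fingerprint values in the fleet outputs above are, as far as I understand it, the SHA-256 fingerprint of the CA certificate in lowercase hex with the colons stripped. A sketch of computing that form for any PEM certificate, so you can check whether the configured fingerprint matches the CA file Kibana is pointed at (demo CA generated here, not the real one):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Throwaway CA standing in for the real elasticsearch-ca.pem:
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1

# SHA-256 fingerprint, colon-free lowercase hex (64 characters):
openssl x509 -in ca.crt -noout -fingerprint -sha256 \
  | cut -d= -f2 | tr -d ':' | tr 'A-F' 'a-f'
```

If the fingerprint of the real CA file does not equal the ca_trusted_fingerprint in kibana.yml, Kibana and the fleet output are trusting different CAs.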

Elasticsearch log from a restart:

[2024-05-07T09:31:32,555][INFO ][o.e.n.NativeAccess       ] [server] Using [jdk] native provider and native methods for [Linux]
[2024-05-07T09:31:33,092][INFO ][o.a.l.i.v.PanamaVectorizationProvider] [server] Java vector incubator API enabled; uses preferredBitSize=256; FMA enabled
[2024-05-07T09:31:33,784][INFO ][o.e.n.Node               ] [server] version[8.13.2], pid[422829], build[rpm/16cc90cd2d08a3147ce02b07e50894bc060a4cbf/2024-04-05T14:45:26.420424304Z], OS[Linux/5.14.0-437.el9.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/21.0.2/21.0.2+13-58]
[2024-05-07T09:31:33,791][INFO ][o.e.n.Node               ] [server] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2024-05-07T09:31:33,791][INFO ][o.e.n.Node               ] [server] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=org.elasticsearch.preallocate, --enable-native-access=org.elasticsearch.nativeaccess, -XX:ReplayDataFile=/var/log/elasticsearch/replay_pid%p.log, -Des.distribution.type=rpm, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-13047233518909387646, --add-modules=jdk.incubator.vector, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,level,pid,tags:filecount=32,filesize=64m, -Xms15920m, -Xmx15920m, -XX:MaxDirectMemorySize=8346664960, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=25, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, --add-modules=ALL-MODULE-PATH, -Djdk.module.main=org.elasticsearch.server]
[2024-05-07T09:31:33,792][INFO ][o.e.n.Node               ] [server] Default Locale [en_US]
[2024-05-07T09:31:38,059][INFO ][o.e.p.PluginsService     ] [server] loaded module [repository-url]
[2024-05-07T09:31:38,060][INFO ][o.e.p.PluginsService     ] [server] loaded module [rest-root]
[2024-05-07T09:31:38,060][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-core]
[2024-05-07T09:31:38,060][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-redact]
[2024-05-07T09:31:38,060][INFO ][o.e.p.PluginsService     ] [server] loaded module [ingest-user-agent]
[2024-05-07T09:31:38,060][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-async-search]
[2024-05-07T09:31:38,060][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-monitoring]
[2024-05-07T09:31:38,060][INFO ][o.e.p.PluginsService     ] [server] loaded module [repository-s3]
[2024-05-07T09:31:38,060][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-analytics]
[2024-05-07T09:31:38,061][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-ent-search]
[2024-05-07T09:31:38,061][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-autoscaling]
[2024-05-07T09:31:38,061][INFO ][o.e.p.PluginsService     ] [server] loaded module [lang-painless]
[2024-05-07T09:31:38,061][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-ml]
[2024-05-07T09:31:38,061][INFO ][o.e.p.PluginsService     ] [server] loaded module [lang-mustache]
[2024-05-07T09:31:38,061][INFO ][o.e.p.PluginsService     ] [server] loaded module [legacy-geo]
[2024-05-07T09:31:38,061][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-ql]
[2024-05-07T09:31:38,061][INFO ][o.e.p.PluginsService     ] [server] loaded module [rank-rrf]
[2024-05-07T09:31:38,062][INFO ][o.e.p.PluginsService     ] [server] loaded module [analysis-common]
[2024-05-07T09:31:38,062][INFO ][o.e.p.PluginsService     ] [server] loaded module [health-shards-availability]
[2024-05-07T09:31:38,062][INFO ][o.e.p.PluginsService     ] [server] loaded module [transport-netty4]
[2024-05-07T09:31:38,062][INFO ][o.e.p.PluginsService     ] [server] loaded module [aggregations]
[2024-05-07T09:31:38,062][INFO ][o.e.p.PluginsService     ] [server] loaded module [ingest-common]
[2024-05-07T09:31:38,062][INFO ][o.e.p.PluginsService     ] [server] loaded module [frozen-indices]
[2024-05-07T09:31:38,062][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-identity-provider]
[2024-05-07T09:31:38,062][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-text-structure]
[2024-05-07T09:31:38,063][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-shutdown]
[2024-05-07T09:31:38,068][INFO ][o.e.p.PluginsService     ] [server] loaded module [snapshot-repo-test-kit]
[2024-05-07T09:31:38,068][INFO ][o.e.p.PluginsService     ] [server] loaded module [ml-package-loader]
[2024-05-07T09:31:38,068][INFO ][o.e.p.PluginsService     ] [server] loaded module [kibana]
[2024-05-07T09:31:38,068][INFO ][o.e.p.PluginsService     ] [server] loaded module [constant-keyword]
[2024-05-07T09:31:38,068][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-logstash]
[2024-05-07T09:31:38,069][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-graph]
[2024-05-07T09:31:38,069][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-ccr]
[2024-05-07T09:31:38,069][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-esql]
[2024-05-07T09:31:38,069][INFO ][o.e.p.PluginsService     ] [server] loaded module [parent-join]
[2024-05-07T09:31:38,069][INFO ][o.e.p.PluginsService     ] [server] loaded module [counted-keyword]
[2024-05-07T09:31:38,069][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-enrich]
[2024-05-07T09:31:38,069][INFO ][o.e.p.PluginsService     ] [server] loaded module [repositories-metering-api]
[2024-05-07T09:31:38,069][INFO ][o.e.p.PluginsService     ] [server] loaded module [transform]
[2024-05-07T09:31:38,069][INFO ][o.e.p.PluginsService     ] [server] loaded module [repository-azure]
[2024-05-07T09:31:38,070][INFO ][o.e.p.PluginsService     ] [server] loaded module [repository-gcs]
[2024-05-07T09:31:38,070][INFO ][o.e.p.PluginsService     ] [server] loaded module [spatial]
[2024-05-07T09:31:38,070][INFO ][o.e.p.PluginsService     ] [server] loaded module [mapper-version]
[2024-05-07T09:31:38,070][INFO ][o.e.p.PluginsService     ] [server] loaded module [apm]
[2024-05-07T09:31:38,070][INFO ][o.e.p.PluginsService     ] [server] loaded module [mapper-extras]
[2024-05-07T09:31:38,070][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-rollup]
[2024-05-07T09:31:38,070][INFO ][o.e.p.PluginsService     ] [server] loaded module [percolator]
[2024-05-07T09:31:38,070][INFO ][o.e.p.PluginsService     ] [server] loaded module [data-streams]
[2024-05-07T09:31:38,070][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-stack]
[2024-05-07T09:31:38,071][INFO ][o.e.p.PluginsService     ] [server] loaded module [reindex]
[2024-05-07T09:31:38,071][INFO ][o.e.p.PluginsService     ] [server] loaded module [rank-eval]
[2024-05-07T09:31:38,071][INFO ][o.e.p.PluginsService     ] [server] loaded module [systemd]
[2024-05-07T09:31:38,071][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-security]
[2024-05-07T09:31:38,071][INFO ][o.e.p.PluginsService     ] [server] loaded module [blob-cache]
[2024-05-07T09:31:38,071][INFO ][o.e.p.PluginsService     ] [server] loaded module [searchable-snapshots]
[2024-05-07T09:31:38,071][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-slm]
[2024-05-07T09:31:38,071][INFO ][o.e.p.PluginsService     ] [server] loaded module [snapshot-based-recoveries]
[2024-05-07T09:31:38,071][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-watcher]
[2024-05-07T09:31:38,072][INFO ][o.e.p.PluginsService     ] [server] loaded module [old-lucene-versions]
[2024-05-07T09:31:38,072][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-ilm]
[2024-05-07T09:31:38,072][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-voting-only-node]
[2024-05-07T09:31:38,072][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-inference]
[2024-05-07T09:31:38,072][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-deprecation]
[2024-05-07T09:31:38,072][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-fleet]
[2024-05-07T09:31:38,072][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-aggregate-metric]
[2024-05-07T09:31:38,072][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-downsample]
[2024-05-07T09:31:38,072][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-profiling]
[2024-05-07T09:31:38,072][INFO ][o.e.p.PluginsService     ] [server] loaded module [ingest-geoip]
[2024-05-07T09:31:38,073][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-write-load-forecaster]
[2024-05-07T09:31:38,073][INFO ][o.e.p.PluginsService     ] [server] loaded module [search-business-rules]
[2024-05-07T09:31:38,073][INFO ][o.e.p.PluginsService     ] [server] loaded module [ingest-attachment]
[2024-05-07T09:31:38,073][INFO ][o.e.p.PluginsService     ] [server] loaded module [wildcard]
[2024-05-07T09:31:38,073][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-apm-data]
[2024-05-07T09:31:38,073][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-sql]
[2024-05-07T09:31:38,073][INFO ][o.e.p.PluginsService     ] [server] loaded module [unsigned-long]
[2024-05-07T09:31:38,073][INFO ][o.e.p.PluginsService     ] [server] loaded module [runtime-fields-common]
[2024-05-07T09:31:38,073][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-async]
[2024-05-07T09:31:38,073][INFO ][o.e.p.PluginsService     ] [server] loaded module [vector-tile]
[2024-05-07T09:31:38,074][INFO ][o.e.p.PluginsService     ] [server] loaded module [lang-expression]
[2024-05-07T09:31:38,074][INFO ][o.e.p.PluginsService     ] [server] loaded module [x-pack-eql]
[2024-05-07T09:31:39,138][INFO ][o.e.e.NodeEnvironment    ] [server] using [1] data paths, mounts [[/data (172.20.5.101:/data)]], net usable_space [546.4gb], net total_space [589.5gb], types [nfs4]
[2024-05-07T09:31:39,139][INFO ][o.e.e.NodeEnvironment    ] [server] heap size [15.5gb], compressed ordinary object pointers [true]
[2024-05-07T09:31:40,998][INFO ][o.e.n.Node               ] [server] node name [server], node ID [MOFXJpsyRSi1h-vco7V2Qw], cluster name [cluster], roles [transform, data_content, data_warm, master, remote_cluster_client, data, data_cold, ingest, data_frozen, ml, data_hot]
[2024-05-07T09:31:46,059][INFO ][o.e.f.FeatureService     ] [server] Registered local node features [data_stream.rollover.lazy, desired_node.version_deprecated, features_supported, health.dsl.info, health.extended_repository_indicator, usage.data_tiers.precalculate_stats]
[2024-05-07T09:31:46,733][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [server] [controller/422853] [Main.cc@123] controller (64 bit): Version 8.13.2 (Build fdd7177d8c1325) Copyright (c) 2024 Elasticsearch BV
[2024-05-07T09:31:47,022][INFO ][o.e.t.a.APM              ] [server] Sending apm metrics is disabled
[2024-05-07T09:31:47,022][INFO ][o.e.t.a.APM              ] [server] Sending apm tracing is disabled
[2024-05-07T09:31:47,049][INFO ][o.e.x.s.Security         ] [server] Security is enabled
[2024-05-07T09:31:47,636][INFO ][o.e.x.s.a.s.FileRolesStore] [server] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2024-05-07T09:31:48,199][INFO ][o.e.x.s.InitialNodeSecurityAutoConfiguration] [server] Auto-configuration will not generate a password for the elastic built-in superuser, as we cannot  determine if there is a terminal attached to the elasticsearch process. You can use the `bin/elasticsearch-reset-password` tool to set the password for the elastic user.
[2024-05-07T09:31:48,556][INFO ][o.e.x.w.Watcher          ] [server] Watcher initialized components at 2024-05-07T07:31:48.555Z
[2024-05-07T09:31:48,613][INFO ][o.e.x.p.ProfilingPlugin  ] [server] Profiling is enabled
[2024-05-07T09:31:48,638][INFO ][o.e.x.p.ProfilingPlugin  ] [server] profiling index templates will not be installed or reinstalled
[2024-05-07T09:31:48,642][INFO ][o.e.x.a.APMPlugin        ] [server] APM ingest plugin is disabled
[2024-05-07T09:31:49,235][INFO ][o.e.t.n.NettyAllocator   ] [server] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=8mb}]
[2024-05-07T09:31:49,264][INFO ][o.e.i.r.RecoverySettings ] [server] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2024-05-07T09:31:49,321][INFO ][o.e.d.DiscoveryModule    ] [server] using discovery type [multi-node] and seed hosts providers [settings]
[2024-05-07T09:31:51,252][INFO ][o.e.n.Node               ] [server] initialized
[2024-05-07T09:31:51,253][INFO ][o.e.n.Node               ] [server] starting ...
[2024-05-07T09:31:51,310][INFO ][o.e.x.s.c.f.PersistentCache] [server] persistent cache index loaded
[2024-05-07T09:31:51,311][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [server] deprecation component started
[2024-05-07T09:31:51,392][INFO ][o.e.t.TransportService   ] [server] publish_address {172.20.5.100:9300}, bound_addresses {0.0.0.0:9300}
[2024-05-07T09:31:53,705][INFO ][o.e.b.BootstrapChecks    ] [server] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2024-05-07T09:31:53,732][WARN ][o.e.c.c.ClusterBootstrapService] [server] this node is locked into cluster UUID [H2A9Q-BGTmuHlZBMYT9VFQ] but [cluster.initial_master_nodes] is set to [server]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts; for further information see https://www.elastic.co/guide/en/elasticsearch/reference/8.13/important-settings.html#initial_master_nodes
[2024-05-07T09:31:53,968][INFO ][o.e.c.s.MasterService    ] [server] elected-as-master ([1] nodes joined in term 28)[_FINISH_ELECTION_, {server}{MOFXJpsyRSi1h-vco7V2Qw}{pGWMntcWSV6OneOIJAZY7w}{server}{172.20.5.100}{172.20.5.100:9300}{cdfhilmrstw}{8.13.2}{7000099-8503000} completing election], term: 28, version: 6067, delta: master node changed {previous [], current [{server}{MOFXJpsyRSi1h-vco7V2Qw}{pGWMntcWSV6OneOIJAZY7w}{server}{172.20.5.100}{172.20.5.100:9300}{cdfhilmrstw}{8.13.2}{7000099-8503000}]}
[2024-05-07T09:31:54,273][INFO ][o.e.c.s.ClusterApplierService] [server] master node changed {previous [], current [{server}{MOFXJpsyRSi1h-vco7V2Qw}{pGWMntcWSV6OneOIJAZY7w}{server}{172.20.5.100}{172.20.5.100:9300}{cdfhilmrstw}{8.13.2}{7000099-8503000}]}, term: 28, version: 6067, reason: Publication{term=28, version=6067}
[2024-05-07T09:31:54,326][INFO ][o.e.c.f.AbstractFileWatchingService] [server] starting file watcher ...
[2024-05-07T09:31:54,349][INFO ][o.e.h.AbstractHttpServerTransport] [server] publish_address {172.20.5.100:9200}, bound_addresses {0.0.0.0:9200}
[2024-05-07T09:31:54,353][INFO ][o.e.c.c.NodeJoinExecutor ] [server] node-join: [{server}{MOFXJpsyRSi1h-vco7V2Qw}{pGWMntcWSV6OneOIJAZY7w}{server}{172.20.5.100}{172.20.5.100:9300}{cdfhilmrstw}{8.13.2}{7000099-8503000}] with reason [completing election]
[2024-05-07T09:31:54,383][INFO ][o.e.c.f.AbstractFileWatchingService] [server] file settings service up and running [tid=57]
[2024-05-07T09:31:54,392][INFO ][o.e.n.Node               ] [server] started {server}{MOFXJpsyRSi1h-vco7V2Qw}{pGWMntcWSV6OneOIJAZY7w}{server}{172.20.5.100}{172.20.5.100:9300}{cdfhilmrstw}{8.13.2}{7000099-8503000}{ml.max_jvm_size=16693329920, ml.config_version=12.0.0, xpack.installed=true, transform.config_version=10.0.0, ml.machine_memory=33387294720, ml.allocated_processors=4, ml.allocated_processors_double=4.0}
[2024-05-07T09:31:54,436][INFO ][o.e.c.s.ClusterSettings  ] [server] updating [xpack.monitoring.collection.enabled] from [false] to [true]
[2024-05-07T09:31:54,758][WARN ][o.e.h.AbstractHttpServerTransport] [server] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/172.20.5.100:9200, remoteAddress=/172.22.40.11:21723}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[?:?]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[?:?]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[?:?]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) ~[?:?]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[?:?]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
        at java.lang.Thread.run(Thread.java:1583) ~[?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        at sun.security.ssl.Alert.createSSLException(Alert.java:130) ~[?:?]
        at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:365) ~[?:?]
        at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:287) ~[?:?]
        at sun.security.ssl.TransportContext.dispatch(TransportContext.java:204) ~[?:?]
        at sun.security.ssl.SSLTransport.decode(SSLTransport.java:172) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
        at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
        at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:297) ~[?:?]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1353) ~[?:?]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1246) ~[?:?]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1295) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[?:?]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[?:?]
        ... 16 more
[2024-05-07T09:31:54,792][WARN ][o.e.h.n.Netty4HttpServerTransport] [server] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:52156}
[2024-05-07T09:31:56,528][INFO ][o.e.x.s.a.Realms         ] [server] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2024-05-07T09:31:56,550][INFO ][o.e.l.ClusterStateLicenseService] [server] license [3575729b-d36a-431e-bb91-0937d6539cf3] mode [basic] - valid
[2024-05-07T09:31:56,564][INFO ][o.e.g.GatewayService     ] [server] recovered [74] indices into cluster_state
[2024-05-07T09:31:57,754][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [server] Node [{server}{MOFXJpsyRSi1h-vco7V2Qw}] is selected as the current health node.
[2024-05-07T09:31:57,755][ERROR][o.e.i.g.GeoIpDownloader  ] [server] exception during geoip databases update
org.elasticsearch.ElasticsearchException: not all primary shards of [.geoip_databases] index are active
        at org.elasticsearch.ingest.geoip.GeoIpDownloader.updateDatabases(GeoIpDownloader.java:131) ~[?:?]
        at org.elasticsearch.ingest.geoip.GeoIpDownloader.runDownloader(GeoIpDownloader.java:279) ~[?:?]
        at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:160) ~[?:?]
        at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:59) ~[?:?]
        at org.elasticsearch.persistent.NodePersistentTasksExecutor$1.doRun(NodePersistentTasksExecutor.java:34) ~[elasticsearch-8.13.2.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:984) ~[elasticsearch-8.13.2.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.13.2.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1583) ~[?:?]
[2024-05-07T09:31:58,809][INFO ][o.e.i.g.DatabaseNodeService] [server] successfully loaded geoip database file [GeoLite2-Country.mmdb]
[2024-05-07T09:31:59,088][INFO ][o.e.i.g.DatabaseNodeService] [server] successfully loaded geoip database file [GeoLite2-ASN.mmdb]
[2024-05-07T09:31:59,831][WARN ][o.e.h.AbstractHttpServerTransport] [server] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/172.20.5.100:9200, remoteAddress=/172.22.40.11:21725}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        [stack trace identical to the first occurrence above; trimmed]
[2024-05-07T09:32:00,079][INFO ][o.e.i.g.DatabaseNodeService] [server] successfully loaded geoip database file [GeoLite2-City.mmdb]
[2024-05-07T09:32:03,270][WARN ][o.e.h.AbstractHttpServerTransport] [server] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/172.20.5.100:9200, remoteAddress=/172.20.5.100:60908}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        [stack trace identical to the first occurrence above; trimmed]
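The repeated bad_certificate alerts above can be reproduced outside curl and Kibana with `openssl s_client`, which prints the verification result explicitly. A minimal, self-contained sketch using a throwaway CA and a local listener on port 9443 (all names and the port are demo placeholders, not this thread's real certs; against the real server you would run `openssl s_client -connect server:9200 -CAfile <your CA file>` and read the same "Verify return code" line):

```shell
set -e
work=$(mktemp -d); cd "$work"

# Throwaway CA and a server cert signed by it (demo only).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=server" 2>/dev/null
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt -days 1 2>/dev/null

# Local TLS listener standing in for Elasticsearch's HTTPS port.
openssl s_server -accept 9443 -cert server.crt -key server.key -quiet &
srv=$!; sleep 1

# With the matching CA file this prints "Verify return code: 0 (ok)";
# with a mismatched CA you get the same failure curl and Kibana report.
echo | openssl s_client -connect 127.0.0.1:9443 -CAfile ca.crt 2>/dev/null \
  | grep "Verify return code"

kill $srv
```

The point of the probe is that s_client shows both the chain the server actually sends and the client-side verdict, which is exactly what the Netty bad_certificate alert hides.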

Kibana log when restarting Kibana:

{"ecs":{"version":"8.11.0"},"@timestamp":"2024-05-07T09:36:10.692+02:00","message":"Kibana is starting","log":{"level":"INFO","logger":"root"},"process":{"pid":424912,"uptime":2.122628285}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-05-07T09:36:10.794+02:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":424912,"uptime":2.171542503},"trace":{"id":"1c5ecba2b255b90d3c1be22e9169167d"},"transaction":{"id":"1733f50764ec1af6"}}
{"ecs":{"version":"8.11.0"},"@timestamp":"2024-05-07T09:36:23.546+02:00","message":"Kibana is starting","log":{"level":"INFO","logger":"root"},"process":{"pid":425001,"uptime":1.9942578}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-05-07T09:36:23.625+02:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":425001,"uptime":2.029314273},"trace":{"id":"c4f28e9ab98ecff025beefe0838f137d"},"transaction":{"id":"7f3b9a59e4f8488b"}}
{"ecs":{"version":"8.11.0"},"@timestamp":"2024-05-07T09:36:36.270+02:00","message":"Kibana is starting","log":{"level":"INFO","logger":"root"},"process":{"pid":425093,"uptime":1.963720852}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-05-07T09:36:36.342+02:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":425093,"uptime":1.992249077},"trace":{"id":"73994ff1323815ad23e02905de58d50f"},"transaction":{"id":"5a938c7ced64165f"}}

Except for the Kibana HTTPS, everything on the 9200 side is using this method:

The Kibana HTTPS cert was made with a Windows CA so it can be trusted when we navigate to it from our clients.

Any other ideas? This is kinda frustrating to deal with because it's not really saying much... I've validated the cert, and the CN and SAN look correct.

No, it's not clear...

Is that the Kibana address?
The error message says it's having trouble decoding the cert, so perhaps a cut-and-paste error, etc.

I'm not sure why you're manually setting up the certs instead of just letting elastic do it for you. But yes those instructions should work.

A fresh install of Elastic will set up everything for you...

Then enroll Kibana....

Then I would test that it's all working..

Then set up the kibana HTTPS cert

I didn't realize you were on Windows... first time you mentioned it... Which version? Did you go read the URL you were pointed to? Windows has some cert-specific issues.

You could try the following in your kibana.yml; to test, set it to "none":

elasticsearch.ssl.verificationMode
Controls the verification of the server certificate that Kibana receives when making an outbound SSL/TLS connection to Elasticsearch. Valid values are "full", "certificate", and "none". Using "full" performs hostname verification, using "certificate" skips hostname verification, and using "none" skips verification entirely. Default: "full"

Also, those are not all the Kibana logs; you should see some failed-to-connect-to-Elasticsearch messages... or perhaps Kibana actually connected. Did you try to log in?

Also I see this... in your elasticsearch log...

So something is trying to connect over HTTP, not HTTPS; perhaps you were just testing something...

And BTW, it's not obvious... when you have answered as many of these as I have, I have to ask... I have literally seen servers named server :slight_smile:

Is that the Kibana address?

Nope. From what I am seeing, that address is my client address; my PC, basically, trying to connect to Kibana.

I'm not sure why you're manually setting up the certs instead of just letting elastic do it for you.

Because we have a Windows CA; not using it doesn't make much sense...

Right now, AFAIK, I went ahead and generated everything using Elastic's utilities (except the Kibana HTTPS).

I didn't realize you were on Windows... first time you mentioned it... Which version? Did you go read the URL you were pointed to? Windows has some cert-specific issues.

I missed this, but my Elastic is installed on Linux. My clients are, obviously, Windows; Windows 10.

:slight_smile: Not Obvious ... but understood now....

[2024-05-07T09:31:54,758][WARN ][o.e.h.AbstractHttpServerTransport] [server] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/172.20.5.100:9200, remoteAddress=/172.22.40.11:21723}

That is not a connection to Kibana; it is a connection to Elasticsearch, as it is in the Elasticsearch logs. So it is unclear to me what it is ... and whether it has any impact or bearing on this issue.

Also, your Kibana logs are either incomplete OR Kibana is connecting to Elasticsearch OK... which is possible.

So what happens when you go to Kibana in the browser at this point?

Did you try testing the following in kibana.yml

elasticsearch.ssl.verificationMode: "none"

You can also try testing first without the Kibana server.ssl.* settings (i.e., no HTTPS from browser to Kibana).

What other kibana logs are there?

[screenshot]

I do notice one thing.

So technically, nothing is listening on 5601, BUT... in the conf, you can see that 5601 is set as the Kibana port.

So I don't understand.

BTW, by mistake I didn't post it, so sorry...

That is not a connection to Kibana; it is a connection to Elasticsearch, as it is in the Elasticsearch logs. So it is unclear to me what it is ... and whether it has any impact or bearing on this issue.

Yeah, I'm not entirely sure WHY my client is trying to reach Elasticsearch, but I don't think that has anything to do with the issue.

Also, your Kibana logs are either incomplete OR Kibana is connecting to Elasticsearch OK... which is possible.

I tried searching, but I don't see any other Kibana logs.

So what happens when you go to Kibana in the browser at this point?

ERR_CONNECTION_REFUSED, which makes sense because, as you can see, Kibana is not listening.
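For the ERR_CONNECTION_REFUSED side, a couple of quick local checks can confirm whether Kibana bound the port and, if not, why it exited. A sketch assuming a Linux host with systemd; the unit name `kibana` is the usual package default (an assumption, adjust if yours differs):

```shell
# Is anything actually bound to 5601?
ss -tlnp | grep ':5601' || echo "nothing listening on 5601"

# If nothing is listening, the service journal usually says why Kibana exited
# (guarded so this is a no-op on hosts without systemd):
if command -v journalctl >/dev/null 2>&1; then
  journalctl -u kibana -n 50 --no-pager
fi
```

If the journal shows the process restarting every few seconds (as the repeated "Kibana is starting" lines with fresh PIDs above suggest), the lines immediately before each restart are where the actual startup error lands.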

I also ran across this:

The thing is (and I agree with you) that Kibana doesn't seem to log much... I changed the logging level to "all" in the yml just in case, but it's still very, very quiet...

Did you try testing the following in kibana.yml

elasticsearch.ssl.verificationMode: "none"

Yup, same thing.

You can also try testing first without the Kibana server.ssl.* settings (i.e., no HTTPS from browser to Kibana).

Without HTTPS it was working. I needed to change it to HTTPS because of some integration thing, I believe.

Slight good news :slight_smile:

That being said, it's been on this page for a few minutes, so it still is not working.

The message that keeps popping is:


[2024-05-08T10:24:11,125][WARN ][o.e.h.AbstractHttpServerTransport] [server] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/172.20.5.100:9200, remoteAddress=/172.22.40.11:58537}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
        [identical bad_certificate stack trace, trimmed; same as the ones above]

172.20.5.100 is my Elastic server
172.22.40.11 is my computer, from which I am trying to connect to Kibana through a web browser.

I feel that the "Kibana is not ready" is obviously a big step but still, I cant see logs thru the web interface

Any other information needed?

Apologies. So there is something basic wrong... and trying to debug it at this point will be difficult.

Here is what I would do if I were you ...

I would completely uninstall / clean / remove everything (all configs, certs, data, logs, etc.) from the UNIX box.

I would install from scratch, not change any settings or configs, and let Elasticsearch do its auto-config; then enroll Kibana... (which at that point will be HTTP).

Following these instructions carefully (example for deb).

Then this

Then share the results and all the configs...

When that works, we will put Kibana on HTTPS.

Also, please provide your installation approach (.deb, rpm, etc.), and please don't assume anything is obvious... as details matter.

That would be my suggestion...

I want to avoid this like the plague.

One of the main reasons is that I don't want to lose my current logs, which are stored on a mount point. If I could import my logs back afterwards, great; all perfect.

But I don't want to do it all over again.

BTW, HTTP was working; I didn't have any issues with that :slight_smile:

The other issue is that I can reinstall it, but I can't count on your great, awesome help along the way, @stephenb, as you obviously have other, more important things to do.

What exactly does that mean... what "HTTP was working"... Elasticsearch? Kibana? Both? When there was no SSL anywhere? What does that mean?

Sooooo backing way...way...way...way... up

Are you saying you had everything working without ANY SSL and then you were trying to establish SSL on everything... I really have lost all context...

In short, Elastic and Kibana do not do anything special with certs... This is all normal cert stuff... which for sure is not fun... but in the end it is just normal cert stuff. I have used self-signed, publicly signed, etc... they all work as long as they are all lined up...

I want to help... but I cannot keep up with the context...

What exactly does that mean... what "HTTP was working"

That means I opened a web browser, went to http://server:5601, Kibana showed up, I logged in, and it worked :slight_smile:

Are you saying you had everything working without ANY SSL and then you were trying to establish SSL on everything... I really have lost all context...

Yes: by default I had access through HTTP (hell, I might be able to revert to that by changing the config files), but I needed HTTPS because one of the integrations required it.

In short, Elastic and Kibana do not do anything special with certs... This is all normal cert stuff... which for sure is not fun... but in the end it is just normal cert stuff. I have used self-signed, publicly signed, etc... they all work as long as they are all lined up...

I agree. This is all certificate stuff, nothing special nor interesting. The issue is that it's stating bad_certificate when the cert is valid.
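One caveat worth checking: `openssl x509 -text` only inspects the certificate's contents, while bad_certificate is about the chain the client trusts. A hedged, self-contained sketch with throwaway CAs showing the check that matters; against the thread's real files, the equivalent would be `openssl verify -CAfile /var/lib/kibana/ca_1713886821490.crt /etc/elasticsearch/certs/server.crt`:

```shell
set -e
work=$(mktemp -d); cd "$work"

# Two unrelated throwaway CAs, and a server cert signed by the first.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca1.key -out ca1.crt \
  -subj "/CN=ca-one" -days 1 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca2.key -out ca2.crt \
  -subj "/CN=ca-two" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout s.key -out s.csr \
  -subj "/CN=server" 2>/dev/null
openssl x509 -req -in s.csr -CA ca1.crt -CAkey ca1.key -CAcreateserial \
  -out s.crt -days 1 2>/dev/null

# Right CA: prints "s.crt: OK".
openssl verify -CAfile ca1.crt s.crt
# Wrong CA: verification fails -- the offline equivalent of bad_certificate,
# even though the cert itself is perfectly well-formed.
openssl verify -CAfile ca2.crt s.crt || true
```

In other words, "the cert is valid" and "the client's CA file signs this exact chain" are two different statements, and only the second one decides the handshake.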

Doing a check, the cert seems good:



[root@server ~]# openssl x509 -in /etc/elasticsearch/certs/server.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            5c:00:00:00:38:aa:55:9d:93:cc:aa:2b:68:00:00:00:00:00:38
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: DC=local, DC=company, CN=company-SRVDC1-CA
        Validity
            Not Before: Apr 30 14:39:06 2024 GMT
            Not After : Apr 30 14:39:06 2026 GMT
        Subject: C=US, ST=State, L=City, O=Company, OU=IT, CN=server
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:a1:43:63:8f:71:04:5c:c7:7f:43:a8:64:9e:45:
                    f3:a2:78:d0:c3:b1:26:ff:49:9f:22:d8:d2:6b:9a:
                    13:ae:37:07:5b:a5:a6:b3:89:07:72:2b:d3:ab:ca:
                    02:16:2c:ef:d5:dc:4a:1a:90:ac:91:c5:86:38:4e:
                    18:c6:eb:8e:a1:e7:77:aa:ed:01:4f:24:a6:7d:fe:
                    64:53:6e:2e:95:08:72:84:d9:19:e6:5c:16:0f:73:
                    2f:cc:56:a8:ec:a4:4d:c4:bf:13:df:51:60:b9:8f:
                    5c:a3:6e:a4:ae:b7:63:2d:9d:04:17:36:af:02:3d:
                    d5:9c:fd:b1:10:a3:82:e2:15:28:61:b0:76:b7:13:
                    73:c3:5f:48:dc:f8:4e:c1:5e:a6:a1:0f:21:10:65:
                    39:df:09:aa:61:9c:0d:46:19:69:f4:06:0a:69:c6:
                    ef:7d:47:7c:4d:45:0b:ac:8f:67:29:bf:a4:c6:26:
                    12:46:d6:c2:c3:8f:67:c0:4e:3b:a:d1:f5:16:6c:
                    b2:89:ce:a7:4b:ed:a:d1:a2:dd:61:b2:49:54:e7:
                    1a:f4:19:57:2a:b5:34:96:97:68:26:aa:30:81:98:
                    b7:1a:04:71:aa:0b:44:27:05:be:c6:e2:e8:8a:51:
                    d8:4d:ef:21:8f:3c:3e:aa:d8:c0:95:4d:ee:9c:70:
                    07:79
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            Microsoft certificate template:
                0-.%+.....7....W...........2........E...%..d...
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            Microsoft Application Policies Extension:
                0.0
..+.......
            X509v3 Subject Key Identifier:
                9F:80:66:B6:90:EB:94:11:F6:A2:58:10:43:11:47:72:FB:CB:0A:5A
            X509v3 Subject Alternative Name:
                DNS:server, DNS:server.fulldomain.local, DNS:172.20.5.100, IP Address:172.20.5.100
            X509v3 Authority Key Identifier:
                36:B8:F8:72:73:31:D9:11:4C:7A:49:3A:1E:CF:94:67:11:EF:68:4C
            X509v3 CRL Distribution Points:
                Full Name:
                  URI:ldap:///CN=company-SRVDC1-CA,CN=SRVCA01,CN=CDP,CN=Public%20Key%20Services,CN=Services,CN=Configuration,DC=company,DC=local?certificateRevocationList?base?objectClass=cRLDistributionPoint
                  URI:http://SRVCA01.company.local/CertEnroll/company-SRVDC1-CA.crl
            Authority Information Access:
                CA Issuers - URI:ldap:///CN=company-SRVDC1-CA,CN=AIA,CN=Public%20Key%20Services,CN=Services,CN=Configuration,DC=company,DC=local?cACertificate?base?objectClass=certificationAuthority
                CA Issuers - URI:http://SRVCA01.company.local/CertEnroll/SRVCA01.company.local_company-SRVDC1-CA.crt
                OCSP - URI:http://SRVCA01.company.local/ocsp
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        5a:e4:11:fc:df:b3:2a:10:5c:f7:d4:71:60:ce:3d:2d:01:8c:
        30:b8:16:d2:5a:c8:fc:48:d2:7f:1b:5b:df:84:f1:c1:db:7f:
        16:84:8d:d0:af:89:4a:8e:1d:16:4b:b7:59:2e:5b:80:8f:0a:
        a6:98:f4:38:03:da:28:6d:44:f0:b9:af:a0:e3:ed:fb:1f:45:
        02:b8:7a:2f:23:7e:4a:75:1e:5f:3b:0b:1b:65:27:6d:4c:40:
        d4:49:e3:71:42:d7:a8:13:17:77:11:6d:08:28:1c:3d:d6:1e:
        a5:4d:f9:a8:6b:68:02:e9:96:7a:34:27:85:86:df:ee:6d:63:
        69:e8:dc:32:31:79:ba:47:35:26:58:e9:80:02:d6:35:11:e3:
        f4:3b:3f:4e:a1:93:e0:56:d6:fb:9e:46:2f:a8:4d:21:18:be:
        16:ab:0f:35:b7:b5:ea:44:62:c4:27:4d:5f:0f:4f:cf:c1:58:
        2f:27:b3:a6:57:11:d7:cd:ed:4b:6b:21:5a:33:6e:54:85:2f:
        7f:65:a8:eb:91:24:15:b4:81:ff:98:fa:87:9f:3c:2e:f4:52:
        11:01:56:04:bb:25:bc:6a:13:40:62:01:f1:09:1a:19:3c:83:
        d7:95:b3:11:15:f1:0d:37:d4:4c:6f:4c:39:05:11:e5:2f:5a:

Is there a Discord for Elastic? That way I can ask more in real time.

The ONLY thing I could do is reinstall ELK BUT retain my current logs.

I would revert to this.

And then add SSL step by step.

Add transport SSL for Elasticsearch. Make sure everything works.

Then add HTTPS for Elasticsearch.

This is a key step. You need to make sure you can curl Elasticsearch through the HTTPS interface using the CA cert.
Do this curl from the Unix box, not the Windows box.

If you get to that point...
I am sure we can get Kibana connected to Elasticsearch (still without Kibana on HTTPS) but over the Elasticsearch HTTPS interface.

Then we can get Kibana working on HTTPS as well.
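The steps above, sketched as incremental elasticsearch.yml fragments (the paths and filenames here are illustrative placeholders, not values from this thread):

```yaml
# Step 1: transport (node-to-node) SSL. Apply, restart, and verify the
# cluster still forms before moving on.
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/transport.p12
xpack.security.transport.ssl.truststore.path: certs/transport.p12

# Step 2: HTTPS on the REST interface, only after step 1 works.
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
```

Step 3 is then the curl check already shown earlier in the thread: `curl --cacert ca.crt -u elastic https://server:9200` from the Unix box.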

There is a Slack channel; you can see the icon at the top of the page.

OK, I'm gonna try to revert and get it to work again.

I just joined Slack, so maybe I'll get some advice there.

I'm putting my old .yml files in a folder called "oldconfs"; that doesn't affect anything, right?

I ask because it's saying I have an old parameter:

[2024-05-10T08:55:24,996][ERROR][o.e.b.Elasticsearch      ] [server] fatal exception while booting Elasticsearch
org.elasticsearch.ElasticsearchSecurityException: invalid configuration for xpack.security.transport.ssl - [xpack.security.transport.ssl.enabled] is not set, but the following settings have been configured in elasticsearch.yml : [xpack.security.transport.ssl.keystore.secure_password,xpack.security.transport.ssl.truststore.secure_password]
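For what it's worth, that error means the transport SSL secure passwords are still configured (they normally live in the Elasticsearch keystore) while transport SSL itself is no longer switched on. A hedged sketch of the fix, assuming transport SSL is meant to stay enabled:

```yaml
# elasticsearch.yml: if xpack.security.transport.ssl.*.secure_password
# entries are still configured, transport SSL must be explicitly
# enabled, or Elasticsearch refuses to boot with this exact error.
xpack.security.transport.ssl.enabled: true
```

Alternatively, if transport SSL is really meant to be off, the orphaned entries can be dropped with `elasticsearch-keystore remove xpack.security.transport.ssl.keystore.secure_password` (and likewise for the truststore entry).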

But wait...

.....Um, has Elastic lost its goddamn mind?

Well, I'm gonna say you're right, @stephenb, because that error message makes no sense.

To make things simpler, I'm gonna go with Debian; maybe that will make troubleshooting easier, I don't know.

Should I open a new thread or can we continue here?

Well, I followed those steps and now I get a connection refused...

I'm gonna take a look at the logs.
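While digging through the logs, it helps to confirm whether anything is listening at all: "connection refused" means no process is bound to the port, which is a different problem from any TLS failure. A minimal sketch (the ports are the ones from this thread; the helper name is mine):

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable hosts.
        return False


# Run this on the Elasticsearch/Kibana host itself. A refused connection
# means the service is down or bound to a different interface, not that
# the certificate is wrong.
print("elasticsearch 9200:", port_open("localhost", 9200))
print("kibana 5601:", port_open("localhost", 5601))
```

If 9200 is closed, the Elasticsearch log (not the TLS config) is the right place to look first.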