I have a 6-node cluster (3 master nodes, 3 data nodes) and I am trying to add 4 more data nodes.
The cluster was created on version 8.18.x and I have since upgraded it to 9.1.5.
It was originally installed from the .rpm file directly, but once I was allowed to (employer policies), I switched to the package manager.
So on the new node I installed Elasticsearch with dnf install elasticsearch,
then I went to one of the existing nodes and ran elasticsearch-create-enrollment-token -s node --url https://lgqa:9200
(I have to use the node name in the URL, otherwise it doesn't work; the cluster was created with the automatic security setup). Back on the node that I want to join to the cluster, I ran elasticsearch-reconfigure-node -v --enrollment-token TOKEN
and I am receiving this error:
12:09:18.290 [main] DEBUG org.elasticsearch.xpack.core.ssl.SSLService - using ssl settings [SslConfiguration[settingPrefix=, explicitlyConfigured=false, trustConfig=JDK-trusted-certs, keyConfig=empty-key-config, verificationMode=FULL, clientAuth=REQUIRED, ciphers=[TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA], supportedProtocols=[TLSv1.3, TLSv1.2]]]
12:09:18.386 [main] DEBUG org.elasticsearch.xpack.core.ssl.SSLService - SSL configuration [xpack.security.transport.ssl] is [SslConfiguration[settingPrefix=, explicitlyConfigured=false, trustConfig=JDK-trusted-certs, keyConfig=empty-key-config, verificationMode=FULL, clientAuth=REQUIRED, ciphers=[TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA], supportedProtocols=[TLSv1.3, TLSv1.2]]]
12:09:18.387 [main] DEBUG org.elasticsearch.xpack.core.ssl.SSLService - SSL configuration [xpack.security.http.ssl] is [SslConfiguration[settingPrefix=, explicitlyConfigured=false, trustConfig=JDK-trusted-certs, keyConfig=empty-key-config, verificationMode=FULL, clientAuth=REQUIRED, ciphers=[TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA], supportedProtocols=[TLSv1.3, TLSv1.2]]]
Unable to communicate with the node on https://10.0.208.65:9200/_security/enroll/node. Error was (certificate_unknown) No subject alternative names matching IP address 10.0.208.65 found
ERROR: Aborting enrolling to cluster. Could not communicate with the node on any of the addresses from the enrollment token. All of [10.0.208.65:9200] were attempted., with exit code 69.
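(As far as I understand, the enrollment token is just base64-encoded JSON, so decoding it shows which addresses the new node will try to reach, which is where the 10.0.208.65:9200 in the error comes from; TOKEN is a placeholder here:)

echo "TOKEN" | base64 -d
# prints something like {"ver":"...","adr":["10.0.208.65:9200"],"fgr":"<ca fingerprint>","key":"<api key>"}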
I think the certificate was generated using only the hostname, while the enrollment is connecting with the IP address from the token, but that is just speculation. We are also fronting the cluster with a VIP on an ADC, which re-encrypts traffic using the alias elastic.domain.
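To check that, my plan is to look at the SANs actually present on the HTTP certificate, roughly like this (assuming openssl is available on the host and the node still answers on 9200; the keystore path is the one from the config below):

openssl s_client -connect 10.0.208.65:9200 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
# or directly against the keystore (its password should be stored in the Elasticsearch
# keystore as xpack.security.http.ssl.keystore.secure_password, as far as I know):
keytool -list -v -keystore /etc/elasticsearch/certs/http.p12 -storetype PKCS12 | grep -A1 SubjectAlternativeName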
The config is:
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: logsapp
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: ${HOSTNAME}
node.roles: [ master ]
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /mnt/shared/var/lib/elasticsearch/${HOSTNAME}
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 17-10-2024 11:12:04
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Discover existing nodes in the cluster
discovery.seed_hosts:
- lgqa:9300
- lgqb:9300
- lgqc:9300
- lgqd:9300
- lgqe:9300
- lgqf:9300
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
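For reference, the new data nodes are meant to end up with essentially the same file, just with the data role and their own data path; a sketch of what I expect (names are only examples):

cluster.name: logsapp
node.name: ${HOSTNAME}
node.roles: [ data ]
path.data: /mnt/shared/var/lib/elasticsearch/${HOSTNAME}
path.logs: /var/log/elasticsearch
# plus the security auto-configuration section that elasticsearch-reconfigure-node should generate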
I am thinking about recreating the certificate to use the FQDN, but I have doubts whether that would help at all and whether it would break my cluster.
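If recreating it is the way to go, my rough idea is to regenerate only the HTTP certificate with both the DNS names and the IPs in the SAN, signed by the existing HTTP CA, roughly like this (just a sketch; the DNS names, IPs and the CA key path are assumptions on my side, since I am not sure where the auto-setup keeps the HTTP CA key):

/usr/share/elasticsearch/bin/elasticsearch-certutil http
# or non-interactively, something along these lines:
/usr/share/elasticsearch/bin/elasticsearch-certutil cert \
  --ca-cert /etc/elasticsearch/certs/http_ca.crt \
  --ca-key /path/to/http_ca.key \
  --dns lgqa,lgqa.domain,elastic.domain \
  --ip 10.0.208.65 \
  --out /etc/elasticsearch/certs/http-new.p12

My understanding is that this would only touch the HTTP layer, while the transport certificates between the nodes stay as they are, which is why I hope it would not break the cluster, but I would like confirmation.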
PS: path.data was modified to use an NFS share; I lost the battle against using it.