Error while joining a node: no subject alternative names

I have a 6-node cluster (3 master, 3 data), and I am trying to add 4 more data nodes.

The cluster was created on version 8.18.x and I have upgraded it to 9.1.5.
It was originally installed from the .rpm package, but once I got permission (employer policies), I switched to the package manager.

Well, I installed Elasticsearch using `dnf install elasticsearch`, then I went to one of the existing nodes and ran `elasticsearch-create-enrollment-token -s node --url https://lgqa:9200` (I need to use the node name in the URL, otherwise it doesn't work; I used the automatic security setup when the cluster was created). Going back to the node which I want to join to the cluster, I ran `elasticsearch-reconfigure-node -v --enrollment-token TOKEN` and I am receiving this error:

12:09:18.290 [main] DEBUG org.elasticsearch.xpack.core.ssl.SSLService - using ssl settings [SslConfiguration[settingPrefix=, explicitlyConfigured=false, trustConfig=JDK-trusted-certs, keyConfig=empty-key-config, verificationMode=FULL, clientAuth=REQUIRED, ciphers=[TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA], supportedProtocols=[TLSv1.3, TLSv1.2]]]
12:09:18.386 [main] DEBUG org.elasticsearch.xpack.core.ssl.SSLService - SSL configuration [xpack.security.transport.ssl] is [SslConfiguration[settingPrefix=, explicitlyConfigured=false, trustConfig=JDK-trusted-certs, keyConfig=empty-key-config, verificationMode=FULL, clientAuth=REQUIRED, ciphers=[TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA], supportedProtocols=[TLSv1.3, TLSv1.2]]]
12:09:18.387 [main] DEBUG org.elasticsearch.xpack.core.ssl.SSLService - SSL configuration [xpack.security.http.ssl] is [SslConfiguration[settingPrefix=, explicitlyConfigured=false, trustConfig=JDK-trusted-certs, keyConfig=empty-key-config, verificationMode=FULL, clientAuth=REQUIRED, ciphers=[TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA], supportedProtocols=[TLSv1.3, TLSv1.2]]]
Unable to communicate with the node on https://10.0.208.65:9200/_security/enroll/node. Error was (certificate_unknown) No subject alternative names matching IP address 10.0.208.65 found
ERROR: Aborting enrolling to cluster. Could not communicate with the node on any of the addresses from the enrollment token. All of [10.0.208.65:9200] were attempted., with exit code 69.

I think the certificate was generated using only the hostname, while the enrollment is trying to connect using the IP address (per the error) — just speculation. Also, we are using a VIP on an ADC, which re-encrypts traffic using the alias elastic.domain.
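One way to check that speculation — a hedged sketch, assuming `openssl` is available; the address in the example comment is the one from the enrollment-token error and should be adjusted to your node:

```shell
# show_sans HOST:PORT — print the subjectAltName entries of the certificate
# served on the node's HTTP layer at that address.
show_sans() {
  echo | openssl s_client -connect "$1" 2>/dev/null \
    | openssl x509 -noout -ext subjectAltName
}

# Example (the address from the error in this thread):
# show_sans 10.0.208.65:9200
```

If the output lists only DNS names (or old IP addresses) and not the IP carried in the token, the SAN mismatch from the error message is confirmed.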

The config is:

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: logsapp
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: ${HOSTNAME}
node.roles: [ master ]
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /mnt/shared/var/lib/elasticsearch/${HOSTNAME}
#
# Path to log files:
#
path.logs: /var/log/elasticsearch

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 17-10-2024 11:12:04
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Discover existing nodes in the cluster
discovery.seed_hosts:
  - lgqa:9300
  - lgqb:9300
  - lgqc:9300
  - lgqd:9300
  - lgqe:9300
  - lgqf:9300

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

I am thinking of recreating the certificate to include the FQDN, but I have doubts whether this would help at all and whether it would break my cluster.

PS: path.data is modified to use an NFS share; I lost the battle against using it.

Well, I tried `elasticsearch-certutil http`, but when I try to create a token afterwards it throws:
ERROR: Unable to create an enrollment token. Elasticsearch node HTTP layer SSL configuration Keystore doesn't contain any PrivateKey entries where the associated certificate is a CA certificate, with exit code 73

I also tried creating a CA first and then using it for the HTTP certificate, but doing that, Elasticsearch doesn't trust the CA.
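For what it's worth, the error above suggests the enrollment tooling expects the HTTP keystore to contain a CA private-key entry, which the auto-setup `http.p12` has but a freshly generated node keystore does not. Below is a rough sketch of the pieces involved, done with plain `openssl` rather than `elasticsearch-certutil` (all file names, the password, and the SANs are illustrative, taken from this thread):

```shell
# Create a throwaway CA, issue a node cert whose SANs carry both the hostname
# and the IP, then bundle the node key + cert plus the CA cert into a PKCS#12.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=es-http-ca"
openssl req -newkey rsa:2048 -nodes \
  -keyout node.key -out node.csr -subj "/CN=lgqa"
printf "subjectAltName=DNS:lgqa,IP:10.0.208.65\n" > san.ext
openssl x509 -req -in node.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out node.crt -extfile san.ext
openssl pkcs12 -export -name lgqa -passout pass:changeme \
  -inkey node.key -in node.crt -certfile ca.crt -out http.p12
```

Note this bundle carries the CA *certificate* but not the CA *key*; per the error message, token creation additionally needs a PrivateKey entry whose certificate is a CA, so a keystore built this way would still not support enrollment tokens.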

Why does the auto setup work, but using the Elasticsearch utilities doesn't?

Hi @gbschenkel

Apologies, but without a long explanation, that is a very confusing error message for an end user to understand.

In short, you don't use an enrollment token when you're manually configuring your nodes or have created your own certs, which is what it seems like you're doing. It's not going to work.

You just need to manually configure your nodes, with proper certs and discovery settings.

This is a little old but perhaps it will help.


Hi @stephenb, I am only trying this because, from what I could find, the cluster was moved from one VLAN to another, changing all the IPs. From what I read on this forum, in the documentation, and using AI, when a token is generated it embeds the node's IP rather than the hostname, so as not to rely on DNS. The IP in the token therefore no longer matches the one the certificate was generated for, which is why it shows "No subject alternative names matching IP address x.x.x.x found". That is why I started trying to generate a new certificate reflecting the new IP, using the same CA that the security auto-configuration used. My last attempt was to create a CA and a certificate just for the HTTP layer, so as not to change anything related to transport. I even tried `http.publish_host` to check whether the token could be generated with just the hostname.

But if I need to configure everything manually, I will do that instead of trying to "fix" the auto-configuration. I only wanted to keep it because my employer's internal CA uses an HSM and lives on the mainframe; I wanted something that I, or my team, could handle ourselves, since the process to request certificates is very time-consuming. Right now the reverse proxy is what handles the valid certificate.

Trying to fix the auto-configuration is probably not going to work / not supported.

Just generate new, proper certs (I would put both the IPs and the hostnames in; it makes it a little more flexible), configure them, and try to start the nodes with proper discovery settings.
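As a hedged sketch of what the manually configured node might look like (hostnames taken from this thread; the cert file names are illustrative and would be the ones you generate and copy to `certs/` on every node), replacing the auto-configuration block:

```yaml
# Manual security settings replacing the auto-generated block.
xpack.security.enabled: true

# HTTP layer: keystore holding the node cert issued with DNS *and* IP SANs.
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Transport layer: same keystore doubling as truststore, as in the original.
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

# Proper discovery: list the existing nodes explicitly.
discovery.seed_hosts: ["lgqa:9300", "lgqb:9300", "lgqc:9300"]
```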

Note: there are probably settings in the Elasticsearch keystore which are no longer valid, or which may need to be deleted or changed as well if you set passwords on your certs, etc.

If you leave stale settings in there, they will be pulled in and used to try to open the certs, and if they are incorrect it will cause a failure.

So check that as well
