Elasticsearch

Hi Folks,

The elasticsearch.yml file of the node with IP ending in .17 (8.15.2, still not upgraded):

```yaml
path.data: /var/lib/elasticsearch/data
path.logs: /var/log/elasticsearch/logs

xpack.security.enabled: false
xpack.security.enrollment.enabled: false

xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

xpack.security.transport.ssl:
  enabled: false
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

#cluster.initial_master_nodes:
#- node1
#- node2
#- node3

network.host: localhost,192.168.0.10
http.port: 9201

searchguard.enterprise_modules_enabled: false

thread_pool.write.queue_size: 1000

xpack.security.http.ssl.supported_protocols:
  - TLSv1.3

xpack.security.transport.ssl.supported_protocols:
  - TLSv1.3

http.max_content_length: 500mb
indices.query.bool.max_clause_count: 200000
thread_pool.search.size: 50

searchguard.ssl.transport.pemkey_filepath: node.key
searchguard.ssl.transport.pemcert_filepath: node-cert.pem
searchguard.ssl.transport.pemtrustedcas_filepath: ca-cert.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.enabled_protocols:
  - TLSv1.2
  - TLSv1.3

searchguard.ssl.http.pemkey_filepath: node.key
searchguard.ssl.http.pemcert_filepath: node-cert.pem
searchguard.ssl.http.pemtrustedcas_filepath: ca-cert.pem
searchguard.ssl.http.enabled: true

searchguard.ssl.http.enabled_ciphers:
  - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_AES_256_GCM_SHA384
  - TLS_CHACHA20_POLY1305_SHA256
  - TLS_AES_128_GCM_SHA256
  - TLS_AES_128_CCM_8_SHA256
  - TLS_AES_128_CCM_SHA256

searchguard.ssl.http.enabled_protocols:
  - TLSv1.2
  - TLSv1.3

searchguard.authcz.admin_dn:

searchguard.nodes_dn:

searchguard.check_snapshot_restore_write_privileges: true

searchguard.restapi.roles_enabled:
  - SGS_ALL_ACCESS

discovery.seed_hosts:
  - 192.168.0.11
  - 192.168.0.12
  - 192.168.0.13

cluster.name: elasticsearch
node.name: node1
node.roles: [master]

bootstrap.memory_lock: true
```

I was upgrading the two data nodes first, from ES 8.15.2 to 8.19.3, in a three-node cluster. The cluster is not forming. The nodes with ip1 and ip2 have been upgraded from 8.15.2 to 8.19.3, and the master-only node3 is still running ES 8.15.2.

The ES service is up and running on all three nodes: two nodes have ES 8.19.3 installed and the third one has ES 8.15.2.

Below is the curl command to list all cluster nodes. Its output throws a master_not_discovered_exception error:

```shell
curl -XGET -u user:pass https://localhost:9201/_cat/nodes?v -k
```

```json
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
```

The elasticsearch.yml file (/etc/elasticsearch/elasticsearch.yml) of ip1, an ES 8.19.3 data node with SSL:

```yaml
path.data: /var/lib/elasticsearch/elasticsearch
path.logs: /var/log/elasticsearch/elasticsearch
xpack.security.enabled: true
xpack.security.enrollment.enabled: false
xpack.security.http.ssl:
  enabled: true
  key: cenode.key
  certificate: cecert.pem
  certificate_authorities: cacert.pem
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  key: cenode.key
  certificate: cecert.pem
  certificate_authorities: cacert.pem
cluster.initial_master_nodes:
  - ip1
  - ip2
  - ip3

network.host: localhost,ip1
http.port: 9201
http.max_content_length: 500mb
indices.query.bool.max_clause_count: 200000
thread_pool.write.queue_size: 1000
thread_pool.search.size: 50

xpack.security.http.ssl.supported_protocols:
  - TLSv1.2
  - TLSv1.3

xpack.security.http.ssl.cipher_suites:
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_AES_256_GCM_SHA384
  - TLS_CHACHA20_POLY1305_SHA256
  - TLS_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
  - TLS_AES_128_CCM_8_SHA256
  - TLS_AES_128_CCM_SHA256

xpack.security.transport.ssl.supported_protocols:
  - TLSv1.2
  - TLSv1.3

discovery.seed_hosts:
  - ip1
  - ip2
  - ip3

cluster.name: elasticsearch
node.name: localhost
node.roles: [data, master]
bootstrap.memory_lock: true
```
The elasticsearch.yml file (/etc/elasticsearch/elasticsearch.yml) of ip2, an ES 8.19.3 data node with SSL:

```yaml
path.data: /var/lib/elasticsearch/elasticsearch
path.logs: /var/log/elasticsearch/elasticsearch
xpack.security.enabled: true
xpack.security.enrollment.enabled: false
xpack.security.http.ssl:
  enabled: true
  key: cenode.key
  certificate: cecert.pem
  certificate_authorities: cacert.pem
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  key: cenode.key
  certificate: cecert.pem
  certificate_authorities: cacert.pem
cluster.initial_master_nodes:
  - ip1
  - ip2
  - ip3

network.host: localhost,ip2
http.port: 9201
http.max_content_length: 500mb
indices.query.bool.max_clause_count: 200000
thread_pool.write.queue_size: 1000
thread_pool.search.size: 50

xpack.security.http.ssl.supported_protocols:
  - TLSv1.2
  - TLSv1.3

xpack.security.http.ssl.cipher_suites:
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_AES_256_GCM_SHA384
  - TLS_CHACHA20_POLY1305_SHA256
  - TLS_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
  - TLS_AES_128_CCM_8_SHA256
  - TLS_AES_128_CCM_SHA256

xpack.security.transport.ssl.supported_protocols:
  - TLSv1.2
  - TLSv1.3

discovery.seed_hosts:
  - ip1
  - ip2
  - ip3

cluster.name: elasticsearch
node.name: localhost
node.roles: [data, master]
bootstrap.memory_lock: true
```

The curl command to list all cluster nodes:

```shell
curl -XGET -u user:pass https://localhost:9201/_cat/nodes?v -k
```

```json
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
```

Usually the cluster forms, but sometimes I get stuck at this problem.

Logs from elasticsearch.log:

```
[2025-10-28T09:56:30,853][INFO ][o.e.c.c.ElectionSchedulerFactory] [localhostjimmysecond] retrying master election after [390] failed attempts; election attempts are currently scheduled up to [10000ms] apart
[2025-10-28T09:56:38,866][WARN ][o.e.c.c.ClusterFormationFailureHelper] [localhostjimmysecond] master not discovered or elected yet, an election requires 2 nodes with ids [wCc8I8nFRGu3aEAgy7BO2g, O8-JnJJnSc-mW69ho5b9hw], have discovered possible quorum [{localhostjimmysecond}{wCc8I8nFRGu3aEAgy7BO2g}{vMo5_Gk7RL-oHo26dec0Gw}{localhostjimmysecond}{X.X.97.18}{X.X97.18:9300}{dm}{8.19.3}{7000099-8536000}, {localhost.localdomain}{O8-JnJJnSc-mW69ho5b9hw}{3MF2LSx4Q1esTwGbKpxO9Q}{localhost.localdomain}{X.X.97.17}{X.X.97.17:9300}{m}{8.15.2}{7000099-8512000}, {localhostjimmy}{iWBhVh_4R82to5nuOd9PRA}{TAYCLd2oRlWrF0JnrkRLAw}{localhostjimmy}{X.X.97.19}{X.X97.19:9300}{dm}{8.19.3}{7000099-8536000}]; discovery will continue using [X.X.97.19:9300, X.X.97.17:9300] from hosts providers and [{localhostjimmysecond}{wCc8I8nFRGu3aEAgy7BO2g}{vMo5_Gk7RL-oHo26dec0Gw}{localhostjimmysecond}{X.X.97.18}{X.X.97.18:9300}{dm}{8.19.3}{7000099-8536000}] from last-known cluster state; node term 0, last-accepted version 0 in term 0; for troubleshooting guidance, see Troubleshooting discovery | Elasticsearch Guide [8.19] | Elastic
[2025-10-28T09:56:48,867][WARN ][o.e.c.c.ClusterFormationFailureHelper] [localhostjimmysecond] master not discovered or elected yet, an election requires 2 nodes with ids [wCc8I8nFRGu3aEAgy7BO2g, O8-JnJJnSc-mW69ho5b9hw], have discovered possible quorum [{localhostjimmysecond}{wCc8I8nFRGu3aEAgy7BO2g}{vMo5_Gk7RL-oHo26dec0Gw}{localhostjimmysecond}{X.X97.18}{X.X.97.18:9300}{dm}{8.19.3}{7000099-8536000}, {localhost.localdomain}{O8-JnJJnSc-mW69ho5b9hw}{3MF2LSx4Q1esTwGbKpxO9Q}{localhost.localdomain}{X.X97.17}{X.X97.17:9300}{m}{8.15.2}{7000099-8512000}, {localhostjimmy}{iWBhVh_4R82to5nuOd9PRA}{TAYCLd2oRlWrF0JnrkRLAw}{localhostjimmy}{X.X97.19}{X.X.97.19:9300}{dm}{8.19.3}{7000099-8536000}]; discovery will continue using [X.X97.19:9300, X.X.97.17:9300] from hosts providers and [{localhostjimmysecond}{wCc8I8nFRGu3aEAgy7BO2g}{vMo5_Gk7RL-oHo26dec0Gw}{localhostjimmysecond}{X.X.97.18}{X.X.97.18:9300}{dm}{8.19.3}{7000099-8536000}] from last-known cluster state; node term 0, last-accepted version 0 in term 0; for troubleshooting guidance, see Troubleshooting discovery | Elasticsearch Guide [8.19] | Elastic
```

Please edit your message and format the logs and config files better; it is very difficult to read your message as-is. You appear to have cluster.initial_master_nodes set in the two configuration files you did share. That setting is not needed after the 3-node cluster has fully formed; discovery.seed_hosts should be enough. There is often a message in the logs telling you this.

Since you have 3 nodes in your cluster, please share all 3 nodes configuration and logs.

Correct anything I got wrong: your cluster was 3 nodes, but only 2 of these (ip1 + ip2) were data nodes, and only one of those data nodes (ip1) has been updated to 8.19.3, along with master-only ip3? The 3 nodes in your cluster appear to be called localhostjimmy == ip1, localhostjimmysecond == ip2, localhost.localdomain == ip3. IMHO that's poor node naming, but it shouldn't really matter.


Hi RainTown, thanks for your reply.

It was a three-node cluster with 8.15.2 installed; then I started the upgrade.

In my environment, the upgrade uninstalls the old ES (8.15.2) and then does a fresh install of 8.19.3.

I started the upgrade on the nodes with ip1 and ip2. Both of those nodes are now upgraded to 8.19.3 as data nodes; the third node with ip3 still has the 8.15.2 master node installed. That's why the cluster.initial_master_nodes setting is not commented out.

Can you share the three configuration files correctly formatted?

Use the preformatted text button (</>) to format them; it is really complicated to understand what the issue is and what the configuration files are.

A couple of things, though: once a cluster has formed, you need to remove cluster.initial_master_nodes. Unless you removed all data from all nodes and are starting a fresh cluster, you need to remove this from all configuration files.

If you upgrade 2 nodes to 8.19.3, you also need to upgrade the last one; a node on version 8.15.2 will not join a cluster with master nodes on 8.19.3.
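To illustrate that advice (ip1/ip2/ip3 are the placeholders used in this thread, not real values), a minimal sketch of the discovery section of each node's elasticsearch.yml after the first successful bootstrap would keep seed_hosts but drop initial_master_nodes:

```yaml
# Sketch only: cluster.initial_master_nodes is deliberately absent because the
# cluster has already been bootstrapped once; discovery uses the seed hosts.
cluster.name: elasticsearch
discovery.seed_hosts:
  - ip1
  - ip2
  - ip3
```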


“If you upgrade 2 nodes to 8.19.3, you also need to upgrade the last one, a node on version 8.15.2 will not join a cluster with master nodes on 8.19.3.”

Yes, but usually the two-node cluster forms. If I start upgrading the third node, then yes, the cluster will form. But now the two-node cluster is not forming; the curl command is throwing this error:

```shell
curl -XGET -u username:password https://localhost:9201/_cat/nodes?v -k
```

```json
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
```

You need to share the formatted configuration as asked, to make it easier to understand how your cluster is configured.

If you have 2 nodes on the same version and they are not forming a cluster, then there is probably something wrong in your configuration.


Er, what's your reason for not upgrading the 3rd node? You are going to have to do this at some point.

It's not a "master node" any more, and it won't be again until you upgrade.

But as already said, you should remove cluster.initial_master_nodes from the config file, on the assumption that you had a working 3-node cluster and you didn't "wipe everything" on the 2 nodes you already upgraded.


Yes, I am going to upgrade the third node eventually. And indeed, when I upgrade the third node, the cluster forms.

But 8 times out of 10 the two-node cluster also forms; why is it not happening this time? Yes, I do remove the cluster.initial_master_nodes setting once the cluster is formed. But now the cluster has not formed yet; that's why this setting has not been removed.

The question is: why do I get this error?

```shell
curl -XGET -u user:pass https://localhost:9201/_cat/nodes?v -k
```

```json
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
```

Is there any official documentation about these errors attached in elasticsearch.log, and how can I get rid of them?

OK, sorry, but you are not articulating clearly what you are trying to do/prove/demonstrate/…

Me, I'd rather complete my upgrade, get my cluster formed, and go on to the next task. It's an old-school approach, I know, but it's served me pretty well :slight_smile:


Yes! I completely agree with you. But it's a design requirement from my company: "a two-node cluster should form". I am doing R&D now.

If you had this cluster working before on a previous version, you should remove the cluster.initial_master_nodes setting. It doesn't matter if you are going to upgrade, shut everything down, etc.; once a cluster has been bootstrapped for the first time, this setting should be removed.

But it's a design/requirement by my company: "a two-node cluster should form".

This is not exactly how Elasticsearch works: if you have a cluster running with 3 master nodes and you lose one of them, your cluster will still work with 2 master nodes running.

But if you shut down your nodes, then to form a cluster again the master election process needs a quorum to form a majority, which is not possible here with just 2 nodes running, so you need to upgrade your third node to the same version as the others and start it.
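The quorum arithmetic behind that statement can be sketched in plain shell (this is ordinary integer math, not an Elasticsearch command):

```shell
# A master election needs a strict majority of the voting configuration.
voting_nodes=3
quorum=$(( voting_nodes / 2 + 1 ))
echo "a ${voting_nodes}-node voting configuration needs ${quorum} live master-eligible nodes"
```

Note that 2 of 3 is numerically a majority; the extra catch during this rolling upgrade is that, as said above, an 8.15.2 node will not join a cluster whose master nodes are on 8.19.3, so it cannot help complete the election.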

I recommend that you check this documentation.

Check the important note on the documentation page:

> If you stop half or more of the nodes in the voting configuration at the same time then the cluster will be unavailable until you bring enough nodes back online to form a quorum again. While the cluster is unavailable, any remaining nodes will report in their logs that they cannot discover or elect a master node.


Do both the nodes you posted config for have the same node name?


Yes @Christian_Dahlqvist, both nodes have the same node name.

I have always thought node names needed to be unique, but I do not see this mentioned in the documentation. Was this configured the same way in the old version, or is this something that may have changed?


Not sure about the old one, but in ES8 I usually use the default node name.

I think it is a recommendation. It is pretty common to use unique node names, as using the same node name for multiple nodes is really confusing and does not make much sense, but I don't think it affects anything else; the nodes use the node id to talk to each other and identify each node.
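As a small illustration (the sample lines below are invented stand-ins for this demo, not real cluster output), duplicate node names can be spotted in `_cat/nodes?h=ip,name`-style output even though the cluster internally keys on node ids:

```shell
# Invented sample in the two-column shape that `GET _cat/nodes?h=ip,name`
# returns; the IPs and names here are placeholders.
nodes='192.168.0.11 localhost
192.168.0.12 localhost
192.168.0.13 node3'

# Print any node name that appears more than once.
dupes=$(echo "$nodes" | awk '{count[$2]++} END {for (n in count) if (count[n] > 1) print n}')
echo "$dupes"
```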


I have just added the elasticsearch.yml file info for the 3rd node (8.15.2):

```yaml
path.data: /var/lib/elasticsearch/data
path.logs: /var/log/elasticsearch/logs

xpack.security.enabled: false
xpack.security.enrollment.enabled: false

xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

xpack.security.transport.ssl:
  enabled: false
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

#cluster.initial_master_nodes:
#- node1
#- node2
#- node3

network.host: localhost,ip3
http.port: 9201

searchguard.enterprise_modules_enabled: false

thread_pool.write.queue_size: 1000

xpack.security.http.ssl.supported_protocols:
  - TLSv1.3

xpack.security.transport.ssl.supported_protocols:
  - TLSv1.3

http.max_content_length: 500mb
indices.query.bool.max_clause_count: 200000
thread_pool.search.size: 50

searchguard.ssl.transport.pemkey_filepath: node.key
searchguard.ssl.transport.pemcert_filepath: node-cert.pem
searchguard.ssl.transport.pemtrustedcas_filepath: ca-cert.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.enabled_protocols:
  - TLSv1.2
  - TLSv1.3

searchguard.ssl.http.pemkey_filepath: node.key
searchguard.ssl.http.pemcert_filepath: node-cert.pem
searchguard.ssl.http.pemtrustedcas_filepath: ca-cert.pem
searchguard.ssl.http.enabled: true

searchguard.ssl.http.enabled_ciphers:
  - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_AES_256_GCM_SHA384
  - TLS_CHACHA20_POLY1305_SHA256
  - TLS_AES_128_GCM_SHA256
  - TLS_AES_128_CCM_8_SHA256
  - TLS_AES_128_CCM_SHA256

searchguard.ssl.http.enabled_protocols:
  - TLSv1.2
  - TLSv1.3

searchguard.authcz.admin_dn:

searchguard.nodes_dn:

searchguard.check_snapshot_restore_write_privileges: true

searchguard.restapi.roles_enabled:
  - SGS_ALL_ACCESS

discovery.seed_hosts:
  - node1
  - node2
  - node3

cluster.name: elasticsearch
node.name: node3
node.roles: [master]

bootstrap.memory_lock: true
```

It looks like you are trying to use Searchguard. Is this only configured on one node, or have you removed this config from the other nodes you posted earlier?

The security config needs to be the same across the whole cluster, and you cannot use Elastic security on some nodes and Searchguard on others. Searchguard is also a third-party plugin that is not supported here, so if you are looking to use it I would recommend you reach out to that community.


Have you read the documentation shared in my previous answer? This one?

If your 2 other nodes are already on 8.19.3, this node will never join the cluster until it is upgraded. If your 2 other nodes also cannot form a cluster, they will not form one until you have a majority quorum, so you need 3 nodes running to form the cluster again.

Also, you are using Searchguard, which is a third-party plugin that is not supported here and that has an impact on the communication between nodes; we cannot provide any insight about it because most people here do not use it.

You need to upgrade your last node and see if you can form a cluster. If not, you should remove all Searchguard configuration and try using only native Elasticsearch settings to see if the issue persists.


Thanks @leandrojmp.
Yes! Upgrading the third node forms the cluster. In the previous package we were using Searchguard in 8.15.2, and now we have removed Searchguard completely.

Actually this is a blocker bug in my company ("the upgraded ES 8.19.3 two-node cluster should form"); eventually I have to solve this.

I have tried stopping the service on the .17 node (ES 8.15.2), deleting its data, and then restarting the two 8.19.3 nodes, but all in vain:

```shell
rm -rf /var/lib/elasticsearch/nodes
```

Same output:

```json
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
```
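As an aside, if this check ends up in a monitoring script, the exception type can be pulled out of that response body even without jq. A minimal self-contained sketch (the JSON is the response body from above; the sed pattern is a crude assumption that is adequate here only because both "type" fields carry the same value):

```shell
# The response body from the failing curl above.
resp='{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}'

# Crude extraction of a "type" field without jq; greedy matching picks the
# last occurrence, which is fine here because both fields are identical.
err_type=$(echo "$resp" | sed -n 's/.*"type":"\([^"]*\)".*/\1/p')
echo "$err_type"
```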