Elasticsearch single-node cluster not working

I am trying to set up an Elasticsearch cluster on a single node, but no master is being discovered at all. If I start with cluster.initial_master_nodes set to one node, a master is elected but cluster formation still does not happen and a RemoteTransportException occurs. I have tried multiple possibilities but no luck.

Here is my configuration.

I tried the same configuration on Windows and it works fine, but on Linux (CentOS) I am facing this problem.

OS version: CentOS Linux release 7.9.2009 (Core)
Elasticsearch: 7.11.2 (RPM distribution)

Node1

cluster.name: escluster
node.name: master-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["master-1", "master-2","master-3"]
node.max_local_storage_nodes: 3

Node2

cluster.name: escluster
node.name: master-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9201
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["master-1", "master-2","master-3"]
node.max_local_storage_nodes: 3

Node3

cluster.name: escluster
node.name: master-3
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9202
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["master-1", "master-2","master-3"]
node.max_local_storage_nodes: 3

All three nodes start successfully, but cluster formation does not happen.
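
(As a quick way to confirm what each instance actually came up as, the root endpoint on each HTTP port reports the node name and cluster name; the ports here are simply the ones from the configs above.)

curl -XGET http://127.0.0.1:9200
curl -XGET http://127.0.0.1:9201
curl -XGET http://127.0.0.1:9202

Each response includes a "name" and "cluster_name" field, which makes it easy to see whether the three processes really picked up master-1, master-2 and master-3.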

[2021-04-10T15:04:22,868][INFO ][o.e.t.TransportService ] [master-1] publish_address {x.x.x.x:9301}, bound_addresses {[::]:9301}
[2021-04-10T15:04:23,246][INFO ][o.e.b.BootstrapChecks ] [master-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-04-10T15:04:23,435][WARN ][o.e.c.c.ClusterBootstrapService] [master-1] bootstrapping cancelled
java.lang.IllegalStateException: requirement [master-1] matches multiple nodes: [{master-1}{yQiAG1tJS7uwgNiyJ3suWg}{k4haC3AdQXiSrceLevHHaA}
{x.x.x.x}{x.x.x.x:9301}{cdhilmrstw}{ml.machine_memory=8078663680, xpack.installed=true, transform.node=true, ml.max_open_jobs=20, ml.max_jvm_size=536870912},
{master-1}{H4T0B4AdTiWVYHW9Dzi0qw}{tQlVqLzYT3OJXtaug6K3cQ}{x.x.x.x}{x.x.x.x:9300}{cdhilmrstw}{ml.machine_memory=8078663680, ml.max_open_jobs=20, xpack.installed=true,
ml.max_jvm_size=536870912, transform.node=true}]

curl -XGET http://127.0.0.1:9200/_cluster/health?pretty
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}

[2021-04-10T15:24:20,990][WARN ][o.e.c.c.ClusterFormationFailureHelper] [master-1] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster,
and this node must discover master-eligible nodes [master-1, master-2, master-3] to bootstrap a cluster: have discovered [{master-1}{RVGdv8o4R7mEZS8G3pF9IQ}
{Ue_VMHc_RFqOjLeW9aJiUw}{x.x.x.x}{x.x.x.x:9302}{cdhilmrstw}{ml.machine_memory=8078663680, xpack.installed=true, transform.node=true, ml.max_open_jobs=20,
ml.max_jvm_size=536870912}, {master-1}{H4T0B4AdTiWVYHW9Dzi0qw}{tQlVqLzYT3OJXtaug6K3cQ}{x.x.x.x}{x.x.x.x:9300}{cdhilmrstw}{ml.machine_memory=8078663680,
ml.max_open_jobs=20, xpack.installed=true, ml.max_jvm_size=536870912, transform.node=true},
{master-1}{yQiAG1tJS7uwgNiyJ3suWg}{k4haC3AdQXiSrceLevHHaA}{x.x.x.x}{x.x.x.x:9301}{cdhilmrstw}{ml.machine_memory=8078663680, ml.max_open_jobs=20, xpack.installed=true, ml.max_jvm_size=536870912, transform.node=true}]; discovery will continue using [127.0.0.1:9300] from hosts providers and [{master-1}{RVGdv8o4R7mEZS8G3pF9IQ}{Ue_VMHc_RFqOjLeW9aJiUw}{x.x.x.x}{x.x.x.x:9302}{cdhilmrstw}{ml.machine_memory=8078663680, xpack.installed=true, transform.node=true, ml.max_open_jobs=20, ml.max_jvm_size=536870912}]
from last-known cluster state; node term 0, last-accepted version 0 in term 0

Any help would be greatly appreciated!

Your nodes are all called master-1, which is why bootstrapping is cancelled (the log shows requirement [master-1] matches multiple nodes); each instance needs its own node.name matching one entry in cluster.initial_master_nodes.
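
A minimal sketch of what the three configs could look like so that each instance gets a distinct identity and can discover the others on one host (the per-node data paths, config locations and port numbers below are assumptions for illustration, not taken from the original post):

# Node 1 (hypothetical config dir, e.g. /etc/elasticsearch-node1/elasticsearch.yml)
cluster.name: escluster
node.name: master-1
path.data: /var/lib/elasticsearch/node1     # assumed: one data directory per instance
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301", "127.0.0.1:9302"]
cluster.initial_master_nodes: ["master-1", "master-2", "master-3"]

# Node 2: node.name: master-2, http.port: 9201, transport.port: 9301, path.data: .../node2
# Node 3: node.name: master-3, http.port: 9202, transport.port: 9302, path.data: .../node3

Note that the seed hosts list the transport ports explicitly; with a bare 127.0.0.1 the log above shows discovery probing only 127.0.0.1:9300. Each process also has to be started with its own config directory (for the RPM install that usually means separate ES_PATH_CONF values or separate service units), otherwise all three read the same elasticsearch.yml and come up as master-1.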

Also, this setting is deprecated as it's a little dangerous:

node.max_local_storage_nodes: 3
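
Rather than letting several processes share /var/lib/elasticsearch via node.max_local_storage_nodes, each instance can be given its own path.data as in the sketch above; the setting has since been removed in 8.x.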
