Forming an Elasticsearch Cluster (v7.0.1)

Hi Elasticians,
I am trying to form one cluster of three nodes. Could someone explain why these three nodes do not form a single cluster?

PART 1:

Intro
I am working with ES 7.0.1 on 3 nodes. All nodes are master-eligible.

Hostnames and IPs of nodes:
n1-7 10.88.88.231
n2-7 10.88.88.232
n3-7 10.88.88.233

Cluster UUID
Cluster UUID is the same on all 3 nodes:

curl --silent -XGET localhost:9200 | grep cluster_uuid
  "cluster_uuid" : "gQ5e4nc1RLyI-HC2NPZmWw",
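To compare the UUIDs in one step, a small helper like this can be used (my own sketch; `extract_uuid` is just an illustrative name):

```shell
# Illustrative helper: pull "cluster_uuid" out of the pretty-printed
# JSON that GET / returns. Not part of Elasticsearch itself.
extract_uuid() {
  grep -o '"cluster_uuid" : "[^"]*"' | cut -d'"' -f4
}

# HTTP is bound to loopback here, so run this on each node:
#   curl --silent localhost:9200 | extract_uuid
```

Note that an identical UUID only shows the nodes share the same persisted cluster state, not that they are currently joined together.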

Listening services

netstat -tulpn | grep 9[23]00

#n1-7
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      10953/java          
tcp6       0      0 10.88.88.231:9300       :::*                    LISTEN      10953/java          
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      10953/java    

#n2-7
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      9372/java           
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      9372/java           
tcp6       0      0 10.88.88.232:9300       :::*                    LISTEN      9372/java 

#n3-7
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      9248/java           
tcp6       0      0 10.88.88.233:9300       :::*                    LISTEN      9248/java           
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      9248/java   

ES configuration on node "n1-7"

cluster.name: local.logs.itles.cz
cluster.remote.connect: false
node.name: ${HOSTNAME}
node.master: true
node.data: false
node.ingest: false
http.port: 9200
http.host: [ "_lo:ipv4_" ]
transport.host: [ "_lo:ipv4_", "_enp0s3:ipv4_" ]
transport.publish_port: 9300
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
xpack.monitoring.enabled: true
xpack.ml.enabled: false
xpack.security.enabled: false
xpack.security.audit.enabled: false
xpack.watcher.enabled: false
discovery.seed_hosts:
  - 10.88.88.231
  - 10.88.88.232
  - 10.88.88.233
cluster.initial_master_nodes:
  - n1-7
  - n2-7
  - n3-7
logger.org.elasticsearch.cluster.coordination.ClusterBootstrapService: TRACE
logger.org.elasticsearch.discovery: TRACE

ES configuration on node "n2-7"

cluster.name: local.logs.itles.cz
cluster.remote.connect: false
node.name: ${HOSTNAME}
node.master: true
node.data: true
node.ingest: true

http.port: 9200
http.host: [ "_lo:ipv4_" ]
transport.host: [ "_lo:ipv4_", "_enp0s3:ipv4_" ]
transport.publish_port: 9300

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

xpack.monitoring.enabled: true
xpack.ml.enabled: false
xpack.security.enabled: false
xpack.security.audit.enabled: false
xpack.watcher.enabled: false

discovery.seed_hosts:
  - 10.88.88.231
  - 10.88.88.232
  - 10.88.88.233

cluster.initial_master_nodes:
  - n1-7
  - n2-7
  - n3-7

logger.org.elasticsearch.cluster.coordination.ClusterBootstrapService: TRACE
logger.org.elasticsearch.discovery: TRACE

ES configuration on node "n3-7"

cluster.name: local.logs.itles.cz
cluster.remote.connect: false
node.name: ${HOSTNAME}
node.master: true
node.data: true
node.ingest: true

http.port: 9200
http.host: [ "_lo:ipv4_" ]
transport.host: [ "_lo:ipv4_", "_enp0s3:ipv4_" ]
transport.publish_port: 9300

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

xpack.monitoring.enabled: true
xpack.ml.enabled: false
xpack.security.enabled: false
xpack.security.audit.enabled: false
xpack.watcher.enabled: false

discovery.seed_hosts:
  - 10.88.88.231
  - 10.88.88.232
  - 10.88.88.233

cluster.initial_master_nodes:
  - n1-7
  - n2-7
  - n3-7

logger.org.elasticsearch.cluster.coordination.ClusterBootstrapService: TRACE
logger.org.elasticsearch.discovery: TRACE

Hostnames
hostnames of servers are: n1-7, n2-7, n3-7

cat /etc/hosts
10.88.88.231 n1-7
10.88.88.232 n2-7
10.88.88.233 n3-7
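Resolution can be cross-checked with getent, which consults /etc/hosts the same way the system resolver does (the helper below is only my own sketch):

```shell
# Illustrative check: print the resolved address for each hostname,
# or "<name> MISSING" if the local resolver cannot resolve it.
check_hosts() {
  for h in "$@"; do
    getent hosts "$h" || echo "$h MISSING"
  done
}

# usage on any of the nodes:
#   check_hosts n1-7 n2-7 n3-7
```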

PART 2:

Cluster health
Cluster health is green on all nodes.

curl --silent -XGET localhost:9200/_cat/health?h=status
green


#on node "n1-7"
curl --silent -XGET localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "local.logs.itles.cz",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 0,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

#on node "n2-7" and "n3-7"
{
  "cluster_name" : "local.logs.itles.cz",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 2,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
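In hindsight, number_of_nodes is the field to watch: each node sees only itself. A quick check along these lines would flag it (the helper name is my own):

```shell
# Extract "number_of_nodes" from pretty-printed _cluster/health output.
nodes_in_cluster() {
  grep -o '"number_of_nodes" : [0-9]*' | grep -o '[0-9]*$'
}

# On each node (expecting 3):
#   n=$(curl --silent 'localhost:9200/_cluster/health?pretty' | nodes_in_cluster)
#   [ "$n" -eq 3 ] || echo "only $n node(s) in this cluster"
```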

Cluster Nodes

[VM][n1-7]:~# curl --silent -XGET localhost:9200/_cat/nodes
10.88.88.231 20 57 3 0.20 0.11 0.13 m * n1-7

[VM][n2-7]:~# curl --silent -XGET localhost:9200/_cat/nodes
10.88.88.232 7 55 0 0.00 0.03 0.11 mdi * n2-7

[VM][n3-7]:~# curl --silent -XGET localhost:9200/_cat/nodes
10.88.88.233 8 55 0 0.00 0.02 0.10 mdi * n3-7

FW
Communication between the nodes is not firewalled; every node can connect to every other node on port 9300.

Logs

less /var/log/elasticsearch/local.logs.itles.cz.log

PART 3:

node n1-7

[2019-06-17T08:10:31,623][INFO ][o.e.p.PluginsService     ] [n1-7] no plugins loaded
[2019-06-17T08:10:33,376][DEBUG][o.e.d.z.ElectMasterService] [n1-7] using minimum_master_nodes [-1]
[2019-06-17T08:10:34,557][DEBUG][o.e.d.SettingsBasedSeedHostsProvider] [n1-7] using initial hosts [10.88.88.231, 10.88.88.232, 10.88.88.233]
[2019-06-17T08:10:34,576][INFO ][o.e.d.DiscoveryModule    ] [n1-7] using discovery type [zen] and seed hosts providers [settings]
[2019-06-17T08:10:35,049][INFO ][o.e.n.Node               ] [n1-7] initialized
[2019-06-17T08:10:35,049][INFO ][o.e.n.Node               ] [n1-7] starting ...
[2019-06-17T08:10:35,162][INFO ][o.e.t.TransportService   ] [n1-7] publish_address {10.88.88.231:9300}, bound_addresses {127.0.0.1:9300}, {10.88.88.231:9300}
[2019-06-17T08:10:35,167][INFO ][o.e.b.BootstrapChecks    ] [n1-7] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-06-17T08:10:35,172][DEBUG][o.e.d.SeedHostsResolver  ] [n1-7] using max_concurrent_resolvers [10], resolver timeout [5s]
[2019-06-17T08:10:35,175][TRACE][o.e.d.PeerFinder         ] [n1-7] activating with nodes: 
   {n1-7}{R8nMMFC3Tq-LDEp4A6lr6w}{5vquvlmORQSb3ykj6ncjVg}{10.88.88.231}{10.88.88.231:9300}{xpack.installed=true}, local

[2019-06-17T08:10:35,176][TRACE][o.e.d.PeerFinder         ] [n1-7] probing master nodes from cluster state: nodes: 
   {n1-7}{R8nMMFC3Tq-LDEp4A6lr6w}{5vquvlmORQSb3ykj6ncjVg}{10.88.88.231}{10.88.88.231:9300}{xpack.installed=true}, local

[2019-06-17T08:10:35,176][TRACE][o.e.d.PeerFinder         ] [n1-7] startProbe(10.88.88.231:9300) not probing local node
[2019-06-17T08:10:35,204][TRACE][o.e.d.SeedHostsResolver  ] [n1-7] resolved host [10.88.88.231] to [10.88.88.231:9300]
[2019-06-17T08:10:35,206][TRACE][o.e.d.SeedHostsResolver  ] [n1-7] resolved host [10.88.88.232] to [10.88.88.232:9300]
[2019-06-17T08:10:35,206][TRACE][o.e.d.SeedHostsResolver  ] [n1-7] resolved host [10.88.88.233] to [10.88.88.233:9300]
[2019-06-17T08:10:35,209][TRACE][o.e.d.PeerFinder         ] [n1-7] probing resolved transport addresses [10.88.88.232:9300, 10.88.88.233:9300]
[2019-06-17T08:10:35,209][TRACE][o.e.d.PeerFinder         ] [n1-7] Peer{transportAddress=10.88.88.232:9300, discoveryNode=null, peersRequestInFlight=false} attempting connection
[2019-06-17T08:10:35,210][TRACE][o.e.d.PeerFinder         ] [n1-7] Peer{transportAddress=10.88.88.233:9300, discoveryNode=null, peersRequestInFlight=false} attempting connection
[2019-06-17T08:10:35,217][TRACE][o.e.d.HandshakingTransportAddressConnector] [n1-7] [connectToRemoteMasterNode[10.88.88.232:9300]] opening probe connection
[2019-06-17T08:10:35,220][TRACE][o.e.d.HandshakingTransportAddressConnector] [n1-7] [connectToRemoteMasterNode[10.88.88.233:9300]] opening probe connection
[2019-06-17T08:10:35,321][TRACE][o.e.d.HandshakingTransportAddressConnector] [n1-7] [connectToRemoteMasterNode[10.88.88.232:9300]] opened probe connection
[2019-06-17T08:10:35,324][TRACE][o.e.d.HandshakingTransportAddressConnector] [n1-7] [connectToRemoteMasterNode[10.88.88.233:9300]] opened probe connection
[2019-06-17T08:10:35,327][TRACE][o.e.d.HandshakingTransportAddressConnector] [n1-7] [connectToRemoteMasterNode[10.88.88.232:9300]] handshake successful: {n2-7}{R8nMMFC3Tq-LDEp4A6lr6w}{3OH8MG9HTv66XE7XxceX3A}{10.88.88.232}{10.88.88.232:9300}{xpack.installed=true}
[2019-06-17T08:10:35,334][TRACE][o.e.d.HandshakingTransportAddressConnector] [n1-7] [connectToRemoteMasterNode[10.88.88.233:9300]] handshake successful: {n3-7}{R8nMMFC3Tq-LDEp4A6lr6w}{_Bu9AgMKQWCxfLu4oBtrAw}{10.88.88.233}{10.88.88.233:9300}{xpack.installed=true}
[2019-06-17T08:10:35,362][TRACE][o.e.d.PeerFinder         ] [n1-7] deactivating and setting leader to {n1-7}{R8nMMFC3Tq-LDEp4A6lr6w}{5vquvlmORQSb3ykj6ncjVg}{10.88.88.231}{10.88.88.231:9300}{xpack.installed=true}
[2019-06-17T08:10:35,363][TRACE][o.e.d.PeerFinder         ] [n1-7] not active
[2019-06-17T08:10:35,391][TRACE][o.e.d.HandshakingTransportAddressConnector] [n1-7] [connectToRemoteMasterNode[10.88.88.232:9300]] full connection successful: {n2-7}{R8nMMFC3Tq-LDEp4A6lr6w}{3OH8MG9HTv66XE7XxceX3A}{10.88.88.232}{10.88.88.232:9300}{xpack.installed=true}
[2019-06-17T08:10:35,421][INFO ][o.e.c.s.MasterService    ] [n1-7] elected-as-master ([1] nodes joined)[{n1-7}{R8nMMFC3Tq-LDEp4A6lr6w}{5vquvlmORQSb3ykj6ncjVg}{10.88.88.231}{10.88.88.231:9300}{xpack.installed=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 12, version: 68, reason: master node changed {previous [], current [{n1-7}{R8nMMFC3Tq-LDEp4A6lr6w}{5vquvlmORQSb3ykj6ncjVg}{10.88.88.231}{10.88.88.231:9300}{xpack.installed=true}]}
[2019-06-17T08:10:35,427][TRACE][o.e.d.HandshakingTransportAddressConnector] [n1-7] [connectToRemoteMasterNode[10.88.88.233:9300]] full connection successful: {n3-7}{R8nMMFC3Tq-LDEp4A6lr6w}{_Bu9AgMKQWCxfLu4oBtrAw}{10.88.88.233}{10.88.88.233:9300}{xpack.installed=true}
[2019-06-17T08:10:35,523][INFO ][o.e.c.s.ClusterApplierService] [n1-7] master node changed {previous [], current [{n1-7}{R8nMMFC3Tq-LDEp4A6lr6w}{5vquvlmORQSb3ykj6ncjVg}{10.88.88.231}{10.88.88.231:9300}{xpack.installed=true}]}, term: 12, version: 68, reason: Publication{term=12, version=68}
[2019-06-17T08:10:35,598][INFO ][o.e.h.AbstractHttpServerTransport] [n1-7] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2019-06-17T08:10:35,599][INFO ][o.e.n.Node               ] [n1-7] started

PART 4:

node n2-7

[2019-06-17T08:02:02,280][INFO ][o.e.n.Node               ] [n2-7] starting ...
[2019-06-17T08:02:02,453][INFO ][o.e.t.TransportService   ] [n2-7] publish_address {10.88.88.232:9300}, bound_addresses {10.88.88.232:9300}, {127.0.0.1:9300}
[2019-06-17T08:02:02,458][INFO ][o.e.b.BootstrapChecks    ] [n2-7] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-06-17T08:02:02,466][DEBUG][o.e.d.SeedHostsResolver  ] [n2-7] using max_concurrent_resolvers [10], resolver timeout [5s]
[2019-06-17T08:02:02,469][TRACE][o.e.d.PeerFinder         ] [n2-7] activating with nodes: 
   {n2-7}{R8nMMFC3Tq-LDEp4A6lr6w}{3OH8MG9HTv66XE7XxceX3A}{10.88.88.232}{10.88.88.232:9300}{xpack.installed=true}, local

[2019-06-17T08:02:02,472][TRACE][o.e.d.PeerFinder         ] [n2-7] probing master nodes from cluster state: nodes: 
   {n2-7}{R8nMMFC3Tq-LDEp4A6lr6w}{3OH8MG9HTv66XE7XxceX3A}{10.88.88.232}{10.88.88.232:9300}{xpack.installed=true}, local

[2019-06-17T08:02:02,473][TRACE][o.e.d.PeerFinder         ] [n2-7] startProbe(10.88.88.232:9300) not probing local node
[2019-06-17T08:02:02,499][TRACE][o.e.d.SeedHostsResolver  ] [n2-7] resolved host [10.88.88.231] to [10.88.88.231:9300]
[2019-06-17T08:02:02,504][TRACE][o.e.d.SeedHostsResolver  ] [n2-7] resolved host [10.88.88.232] to [10.88.88.232:9300]
[2019-06-17T08:02:02,504][TRACE][o.e.d.SeedHostsResolver  ] [n2-7] resolved host [10.88.88.233] to [10.88.88.233:9300]
[2019-06-17T08:02:02,505][TRACE][o.e.d.PeerFinder         ] [n2-7] probing resolved transport addresses [10.88.88.231:9300, 10.88.88.233:9300]
[2019-06-17T08:02:02,507][TRACE][o.e.d.PeerFinder         ] [n2-7] Peer{transportAddress=10.88.88.231:9300, discoveryNode=null, peersRequestInFlight=false} attempting connection
[2019-06-17T08:02:02,508][TRACE][o.e.d.PeerFinder         ] [n2-7] Peer{transportAddress=10.88.88.233:9300, discoveryNode=null, peersRequestInFlight=false} attempting connection
[2019-06-17T08:02:02,511][TRACE][o.e.d.HandshakingTransportAddressConnector] [n2-7] [connectToRemoteMasterNode[10.88.88.231:9300]] opening probe connection
[2019-06-17T08:02:02,526][TRACE][o.e.d.HandshakingTransportAddressConnector] [n2-7] [connectToRemoteMasterNode[10.88.88.233:9300]] opening probe connection
[2019-06-17T08:02:02,586][TRACE][o.e.d.PeerFinder         ] [n2-7] deactivating and setting leader to {n2-7}{R8nMMFC3Tq-LDEp4A6lr6w}{3OH8MG9HTv66XE7XxceX3A}{10.88.88.232}{10.88.88.232:9300}{xpack.installed=true}
[2019-06-17T08:02:02,587][TRACE][o.e.d.PeerFinder         ] [n2-7] not active
[2019-06-17T08:02:02,681][INFO ][o.e.c.s.MasterService    ] [n2-7] elected-as-master ([1] nodes joined)[{n2-7}{R8nMMFC3Tq-LDEp4A6lr6w}{3OH8MG9HTv66XE7XxceX3A}{10.88.88.232}{10.88.88.232:9300}{xpack.installed=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 10, version: 69, reason: master node changed {previous [], current [{n2-7}{R8nMMFC3Tq-LDEp4A6lr6w}{3OH8MG9HTv66XE7XxceX3A}{10.88.88.232}{10.88.88.232:9300}{xpack.installed=true}]}
[2019-06-17T08:02:02,690][DEBUG][o.e.d.PeerFinder         ] [n2-7] Peer{transportAddress=10.88.88.233:9300, discoveryNode=null, peersRequestInFlight=false} connection failed
org.elasticsearch.transport.ConnectTransportException: [][10.88.88.233:9300] connect_exception
        at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1299) ~[elasticsearch-7.0.1.jar:7.0.1]
        ....
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) ~[?:?]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) ~[?:?]
        at java.lang.Thread.run(Thread.java:835) [?:?]
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /10.88.88.233:9300
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779) ~[?:?]
        at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327) ~[?:?]
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
        ... 6 more
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779) ~[?:?]
        at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327) ~[?:?]
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
        ... 6 more
[2019-06-17T08:02:02,789][TRACE][o.e.d.HandshakingTransportAddressConnector] [n2-7] [connectToRemoteMasterNode[10.88.88.231:9300]] opened probe connection
[2019-06-17T08:02:02,802][TRACE][o.e.d.HandshakingTransportAddressConnector] [n2-7] [connectToRemoteMasterNode[10.88.88.231:9300]] handshake successful: {n1-7}{R8nMMFC3Tq-LDEp4A6lr6w}{oufKSehNSU6EwG3M9o9yHw}{10.88.88.231}{10.88.88.231:9300}{xpack.installed=true}
[2019-06-17T08:02:02,833][INFO ][o.e.c.s.ClusterApplierService] [n2-7] master node changed {previous [], current [{n2-7}{R8nMMFC3Tq-LDEp4A6lr6w}{3OH8MG9HTv66XE7XxceX3A}{10.88.88.232}{10.88.88.232:9300}{xpack.installed=true}]}, term: 10, version: 69, reason: Publication{term=10, version=69}
[2019-06-17T08:02:02,937][TRACE][o.e.d.HandshakingTransportAddressConnector] [n2-7] [connectToRemoteMasterNode[10.88.88.231:9300]] full connection successful: {n1-7}{R8nMMFC3Tq-LDEp4A6lr6w}{oufKSehNSU6EwG3M9o9yHw}{10.88.88.231}{10.88.88.231:9300}{xpack.installed=true}
[2019-06-17T08:02:02,954][INFO ][o.e.h.AbstractHttpServerTransport] [n2-7] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2019-06-17T08:02:02,954][INFO ][o.e.n.Node               ] [n2-7] started
[2019-06-17T08:02:03,174][INFO ][o.e.l.LicenseService     ] [n2-7] license [c70436fa-0f86-45a0-a010-17fdcf48a005] mode [basic] - valid
[2019-06-17T08:02:03,188][INFO ][o.e.g.GatewayService     ] [n2-7] recovered [2] indices into cluster_state
[2019-06-17T08:02:03,484][TRACE][o.e.d.PeerFinder         ] [n2-7] not active
[2019-06-17T08:02:03,581][INFO ][o.e.c.r.a.AllocationService] [n2-7] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.kibana_1][0]] ...]).

PART 5:

node n3-7

[2019-06-17T08:01:58,646][INFO ][o.e.p.PluginsService     ] [n3-7] no plugins loaded
[2019-06-17T08:02:01,273][DEBUG][o.e.d.z.ElectMasterService] [n3-7] using minimum_master_nodes [-1]
[2019-06-17T08:02:02,801][DEBUG][o.e.d.SettingsBasedSeedHostsProvider] [n3-7] using initial hosts [10.88.88.231, 10.88.88.232, 10.88.88.233]
[2019-06-17T08:02:02,830][INFO ][o.e.d.DiscoveryModule    ] [n3-7] using discovery type [zen] and seed hosts providers [settings]
[2019-06-17T08:02:03,558][INFO ][o.e.n.Node               ] [n3-7] initialized
[2019-06-17T08:02:03,559][INFO ][o.e.n.Node               ] [n3-7] starting ...
[2019-06-17T08:02:03,728][INFO ][o.e.t.TransportService   ] [n3-7] publish_address {10.88.88.233:9300}, bound_addresses {127.0.0.1:9300}, {10.88.88.233:9300}
[2019-06-17T08:02:03,734][INFO ][o.e.b.BootstrapChecks    ] [n3-7] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-06-17T08:02:03,745][DEBUG][o.e.d.SeedHostsResolver  ] [n3-7] using max_concurrent_resolvers [10], resolver timeout [5s]
[2019-06-17T08:02:03,749][TRACE][o.e.d.PeerFinder         ] [n3-7] activating with nodes: 
   {n3-7}{R8nMMFC3Tq-LDEp4A6lr6w}{_Bu9AgMKQWCxfLu4oBtrAw}{10.88.88.233}{10.88.88.233:9300}{xpack.installed=true}, local

[2019-06-17T08:02:03,750][TRACE][o.e.d.PeerFinder         ] [n3-7] probing master nodes from cluster state: nodes: 
   {n3-7}{R8nMMFC3Tq-LDEp4A6lr6w}{_Bu9AgMKQWCxfLu4oBtrAw}{10.88.88.233}{10.88.88.233:9300}{xpack.installed=true}, local

[2019-06-17T08:02:03,751][TRACE][o.e.d.PeerFinder         ] [n3-7] startProbe(10.88.88.233:9300) not probing local node
[2019-06-17T08:02:03,774][TRACE][o.e.d.SeedHostsResolver  ] [n3-7] resolved host [10.88.88.231] to [10.88.88.231:9300]
[2019-06-17T08:02:03,779][TRACE][o.e.d.SeedHostsResolver  ] [n3-7] resolved host [10.88.88.232] to [10.88.88.232:9300]
[2019-06-17T08:02:03,779][TRACE][o.e.d.SeedHostsResolver  ] [n3-7] resolved host [10.88.88.233] to [10.88.88.233:9300]
[2019-06-17T08:02:03,781][TRACE][o.e.d.PeerFinder         ] [n3-7] probing resolved transport addresses [10.88.88.231:9300, 10.88.88.232:9300]
[2019-06-17T08:02:03,782][TRACE][o.e.d.PeerFinder         ] [n3-7] Peer{transportAddress=10.88.88.231:9300, discoveryNode=null, peersRequestInFlight=false} attempting connection
[2019-06-17T08:02:03,784][TRACE][o.e.d.PeerFinder         ] [n3-7] Peer{transportAddress=10.88.88.232:9300, discoveryNode=null, peersRequestInFlight=false} attempting connection
[2019-06-17T08:02:03,789][TRACE][o.e.d.HandshakingTransportAddressConnector] [n3-7] [connectToRemoteMasterNode[10.88.88.231:9300]] opening probe connection
[2019-06-17T08:02:03,793][TRACE][o.e.d.HandshakingTransportAddressConnector] [n3-7] [connectToRemoteMasterNode[10.88.88.232:9300]] opening probe connection
[2019-06-17T08:02:03,845][TRACE][o.e.d.PeerFinder         ] [n3-7] deactivating and setting leader to {n3-7}{R8nMMFC3Tq-LDEp4A6lr6w}{_Bu9AgMKQWCxfLu4oBtrAw}{10.88.88.233}{10.88.88.233:9300}{xpack.installed=true}
[2019-06-17T08:02:03,845][TRACE][o.e.d.PeerFinder         ] [n3-7] not active
[2019-06-17T08:02:03,924][INFO ][o.e.c.s.MasterService    ] [n3-7] elected-as-master ([1] nodes joined)[{n3-7}{R8nMMFC3Tq-LDEp4A6lr6w}{_Bu9AgMKQWCxfLu4oBtrAw}{10.88.88.233}{10.88.88.233:9300}{xpack.installed=true} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 10, version: 70, reason: master node changed {previous [], current [{n3-7}{R8nMMFC3Tq-LDEp4A6lr6w}{_Bu9AgMKQWCxfLu4oBtrAw}{10.88.88.233}{10.88.88.233:9300}{xpack.installed=true}]}
[2019-06-17T08:02:03,975][TRACE][o.e.d.HandshakingTransportAddressConnector] [n3-7] [connectToRemoteMasterNode[10.88.88.231:9300]] opened probe connection
[2019-06-17T08:02:04,011][TRACE][o.e.d.HandshakingTransportAddressConnector] [n3-7] [connectToRemoteMasterNode[10.88.88.232:9300]] opened probe connection
[2019-06-17T08:02:04,017][TRACE][o.e.d.HandshakingTransportAddressConnector] [n3-7] [connectToRemoteMasterNode[10.88.88.231:9300]] handshake successful: {n1-7}{R8nMMFC3Tq-LDEp4A6lr6w}{oufKSehNSU6EwG3M9o9yHw}{10.88.88.231}{10.88.88.231:9300}{xpack.installed=true}
[2019-06-17T08:02:04,057][TRACE][o.e.d.HandshakingTransportAddressConnector] [n3-7] [connectToRemoteMasterNode[10.88.88.232:9300]] handshake successful: {n2-7}{R8nMMFC3Tq-LDEp4A6lr6w}{3OH8MG9HTv66XE7XxceX3A}{10.88.88.232}{10.88.88.232:9300}{xpack.installed=true}
[2019-06-17T08:02:04,069][INFO ][o.e.c.s.ClusterApplierService] [n3-7] master node changed {previous [], current [{n3-7}{R8nMMFC3Tq-LDEp4A6lr6w}{_Bu9AgMKQWCxfLu4oBtrAw}{10.88.88.233}{10.88.88.233:9300}{xpack.installed=true}]}, term: 10, version: 70, reason: Publication{term=10, version=70}
[2019-06-17T08:02:04,073][TRACE][o.e.d.HandshakingTransportAddressConnector] [n3-7] [connectToRemoteMasterNode[10.88.88.231:9300]] full connection successful: {n1-7}{R8nMMFC3Tq-LDEp4A6lr6w}{oufKSehNSU6EwG3M9o9yHw}{10.88.88.231}{10.88.88.231:9300}{xpack.installed=true}
[2019-06-17T08:02:04,112][TRACE][o.e.d.HandshakingTransportAddressConnector] [n3-7] [connectToRemoteMasterNode[10.88.88.232:9300]] full connection successful: {n2-7}{R8nMMFC3Tq-LDEp4A6lr6w}{3OH8MG9HTv66XE7XxceX3A}{10.88.88.232}{10.88.88.232:9300}{xpack.installed=true}
[2019-06-17T08:02:04,134][INFO ][o.e.h.AbstractHttpServerTransport] [n3-7] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2019-06-17T08:02:04,135][INFO ][o.e.n.Node               ] [n3-7] started
[2019-06-17T08:02:04,324][INFO ][o.e.l.LicenseService     ] [n3-7] license [c70436fa-0f86-45a0-a010-17fdcf48a005] mode [basic] - valid
[2019-06-17T08:02:04,331][INFO ][o.e.g.GatewayService     ] [n3-7] recovered [2] indices into cluster_state
[2019-06-17T08:02:04,612][INFO ][o.e.c.r.a.AllocationService] [n3-7] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.kibana_1][0]] ...]).
[2019-06-17T08:02:04,756][TRACE][o.e.d.PeerFinder         ] [n3-7] not active

PART 6:

On node n2-7 we can see:

[2019-06-17T08:02:02,690][DEBUG][o.e.d.PeerFinder         ] [n2-7] Peer{transportAddress=10.88.88.233:9300, discoveryNode=null, peersRequestInFlight=false} connection failed
org.elasticsearch.transport.ConnectTransportException: [][10.88.88.233:9300] connect_exception
...
Caused by: java.net.ConnectException: Connection refused

But node n2-7 (10.88.88.232) can in fact connect to 10.88.88.233 on port 9300:

telnet 10.88.88.233 9300
Trying 10.88.88.233...
Connected to 10.88.88.233.
Escape character is '^]'.

Do you have any idea how to solve this?

I tried removing the /var/lib/elasticsearch directory on all nodes and the cluster then formed... but in production that would not be the right solution.
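For completeness, what I did was roughly this (sketch only; the service name matches my systemd setup, and wiping the data path of course destroys everything stored on that node):

```shell
# DANGER: removes everything under the given data path. Only acceptable
# here because the cluster is new and holds nothing worth keeping.
reset_node_data() {
  data_dir=$1
  # systemctl stop elasticsearch     # stop the node first
  rm -rf "${data_dir:?}"/*           # :? aborts if data_dir is empty/unset
  # systemctl start elasticsearch    # node starts with a fresh node ID
}

# on each node:
#   reset_node_data /var/lib/elasticsearch
```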

What am I doing wrong?
Vasek

Thank you for the comprehensive information, it's very helpful. I see multiple nodes with the same node ID, R8nMMFC3Tq-LDEp4A6lr6w:

{n1-7}{R8nMMFC3Tq-LDEp4A6lr6w}{oufKSehNSU6EwG3M9o9yHw}{10.88.88.231}{10.88.88.231:9300}{xpack.installed=true}
{n2-7}{R8nMMFC3Tq-LDEp4A6lr6w}{3OH8MG9HTv66XE7XxceX3A}{10.88.88.232}{10.88.88.232:9300}{xpack.installed=true}
{n3-7}{R8nMMFC3Tq-LDEp4A6lr6w}{_Bu9AgMKQWCxfLu4oBtrAw}{10.88.88.233}{10.88.88.233:9300}{xpack.installed=true}

This means you've cloned the data paths between these nodes, which you shouldn't do. Instead, start all the nodes with empty data paths and they will form a cluster together.
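For anyone hitting this later: in these descriptors the second {...} field is the persistent node ID (the third is the per-start ephemeral ID), so duplicates can be spotted mechanically with something like this illustrative snippet:

```shell
# Extract the persistent node ID (the second {...} field) from node
# descriptor lines read on stdin.
node_id() {
  sed 's/^{[^}]*}{\([^}]*\)}.*/\1/'
}

# Print any node ID that appears more than once.
find_duplicate_ids() {
  node_id | sort | uniq -d
}
```

Feeding the three descriptor lines above through `find_duplicate_ids` prints the shared ID.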


Yes, that's right. I cloned these nodes and did not realize that. Thank you for your time, David.

Now it works :wink:

10.88.88.231 22 77 75 3.78 2.83 1.18 m   - n1-7
10.88.88.232 17 56 80 2.41 1.27 0.51 mdi - n2-7
10.88.88.233 10 59 65 2.47 1.34 0.54 mdi * n3-7
