Node 1: 
cluster.name: my-application 
node.name: node-1 
node.master: true 
node.data: true 
network.host: 192.168.x.xxx 
network.publish_host: 192.168.x.xxx 
network.bind_host: 192.168.x.xxx
discovery.zen.ping.unicast.hosts: ["192.168.x.xxx:9200", "192.168.y.yyy:9200"]
Node 2: 
cluster.name: my-application 
node.name: node-2 
node.master: false 
node.data: true 
network.host: 192.168.x.xxx 
network.publish_host: 192.168.x.xxx 
network.bind_host: 192.168.x.xxx
discovery.zen.ping.unicast.hosts: ["192.168.x.xxx:9200", "192.168.y.yyy:9200"]
dadoonet (David Pilato)
July 30, 2018, 8:43pm
              This is wrong:
discovery.zen.ping.unicast.hosts: ["192.168.x.xxx:9200", "192.168.y.yyy:9200"]
 
This should be:
discovery.zen.ping.unicast.hosts: ["192.168.x.xxx:9300", "192.168.y.yyy:9300"]
 
Or easier:
discovery.zen.ping.unicast.hosts: ["192.168.x.xxx", "192.168.y.yyy"]
 
              I tried that David, but it still does not work.
Every node has to be configured in its own elasticsearch.yml file, right?
dadoonet (David Pilato)
July 30, 2018, 11:18pm
              Can you share the logs of both nodes, please? 
Please format them using the </> icon to have something like:
THIS IS A LOG LINE
ANOTHER LOG LINE
 
Thanks.
zqc0512 (andy_zhou)
July 31, 2018, 2:11am
Please show the logs. Also, a cluster with only two data nodes isn't a good setup.
roshni (R_C)
July 31, 2018, 5:48am
What is the error you are receiving?
And what is the value of discovery.zen.minimum_master_nodes?
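For context, the usual rule for this setting is a quorum of master-eligible nodes: (master_eligible_nodes / 2) + 1. A sketch based on the configs posted above, where only node-1 is master-eligible (node-2 has node.master: false):

# Sketch, assuming the topology from the original post:
# one master-eligible node (node-1), so quorum = (1 / 2) + 1 = 1
discovery.zen.minimum_master_nodes: 1
# if node-2 were also master-eligible, this would have to be
# (2 / 2) + 1 = 2 to avoid split brain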
              Node 1 (master):
[2018-08-01T17:08:44,188][INFO ][o.e.x.s.a.s.FileRolesStore] [node-1] parsed [0] roles from file [C:\Users\player\Documents\elasticsearch-6.3.2\elasticsearch-6.3.2\config\roles.yml] 
[2018-08-01T17:08:46,580][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/33744] [Main.cc@109] controller (64 bit): Version 6.3.2 (Build 903094f295d249) Copyright (c) 2018 Elasticsearch BV 
[2018-08-01T17:08:46,921][DEBUG][o.e.a.ActionModule       ] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security 
[2018-08-01T17:08:48,797][INFO ][o.e.d.DiscoveryModule    ] [node-1] using discovery type [zen] 
[2018-08-01T17:08:50,582][INFO ][o.e.n.Node               ] [node-1] initialized 
[2018-08-01T17:08:50,583][INFO ][o.e.n.Node               ] [node-1] starting ... 
[2018-08-01T17:08:51,192][INFO ][o.e.t.TransportService   ] [node-1] publish_address {192.168.0.X6:9300}, bound_addresses {192.168.0.X6:9300} 
[2018-08-01T17:08:51,323][INFO ][o.e.b.BootstrapChecks    ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks 
[2018-08-01T17:08:54,452][INFO ][o.e.c.s.MasterService    ] [node-1] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {node-1}{iJpsgOjuRYyeZevWGh5Gwg}{G6Qulos4RLSgZSey1vVrxA}{192.168.0.X6}{192.168.0.X6:9300}{ml.machine_memory=8471482368, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} 
[2018-08-01T17:08:54,462][INFO ][o.e.c.s.ClusterApplierService] [node-1] new_master {node-1}{iJpsgOjuRYyeZevWGh5Gwg}{G6Qulos4RLSgZSey1vVrxA}{192.168.0.X6}{192.168.0.X6:9300}{ml.machine_memory=8471482368, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {node-1}{iJpsgOjuRYyeZevWGh5Gwg}{G6Qulos4RLSgZSey1vVrxA}{192.168.0.X6}{192.168.0.X6:9300}{ml.machine_memory=8471482368, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]]) 
[2018-08-01T17:08:54,599][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node-1] publish_address {192.168.0.X6:9200}, bound_addresses {192.168.0.X6:9200} 
[2018-08-01T17:08:54,617][INFO ][o.e.n.Node               ] [node-1] started 
[2018-08-01T17:08:55,512][INFO ][o.e.c.s.ClusterSettings  ] [node-1] updating [xpack.monitoring.collection.enabled] from [false] to [true] 
[2018-08-01T17:08:56,585][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [node-1] Failed to clear cache for realms [ ] 
[2018-08-01T17:08:56,634][INFO ][o.e.l.LicenseService     ] [node-1] license [d99825dd-fc03-4946-946e-0a00d633908b] mode [basic] - valid 
[2018-08-01T17:08:56,652][INFO ][o.e.g.GatewayService     ] [node-1] recovered [15] indices into cluster_state 
[2018-08-01T17:09:01,041][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
Node 2: 
zqc0512 (andy_zhou)
August 2, 2018, 12:28am
Connection timed out? Then the network may not be OK. Can you connect with telnet to IP:9300?
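If telnet times out, the node may not be listening on its LAN interface at all. A sketch of the settings that control what such a telnet test reaches, reusing the redacted addresses from the original post; if these are correct and 9300 still times out, a host firewall is the usual culprit:

# Sketch: make the node listen for transport traffic on its LAN IP,
# so that telnet from the other machine can reach port 9300.
network.host: 192.168.x.xxx     # sets both bind and publish address
transport.tcp.port: 9300        # the default transport port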
TimV (Tim Vernum)
August 2, 2018, 12:37am
              
From the error, my best guess is that one of these IP addresses is incorrect. But it's impossible for us to give you much advice since you're redacting them. 
Are the IP addresses on your private network really so sensitive that you need to make debugging 10 times harder by hiding them?
system (system)
August 30, 2018, 12:50am
              This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.