[root@frghcslnetv12 data]# tail /var/log/elasticsearch/network-logs.log
[2018-06-04T12:15:42,440][INFO ][o.e.d.z.ZenDiscovery ] [9hSWSDU] failed to send join request to master [{network-1}{M3r-tbp1QuqOQSj5UM6Ehw}{MdvaRbsfRty_dAHM8jKmkQ}{172.16.250.29}{172.16.250.29:9300}], reason [RemoteTransportException[[network-1][172.16.250.29:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[9hSWSDU][172.16.250.30:9300] connect_exception]; nested: IOException[No route to host: 172.16.250.30/172.16.250.30:9300]; nested: IOException[No route to host]; ]
[2018-06-04T12:15:45,466][INFO ][o.e.d.z.ZenDiscovery ] [9hSWSDU] failed to send join request to master [{network-1}{M3r-tbp1QuqOQSj5UM6Ehw}{MdvaRbsfRty_dAHM8jKmkQ}{172.16.250.29}{172.16.250.29:9300}], reason [RemoteTransportException[[network-1][172.16.250.29:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[9hSWSDU][172.16.250.30:9300] connect_exception]; nested: IOException[No route to host: 172.16.250.30/172.16.250.30:9300]; nested: IOException[No route to host]; ]
[2018-06-04T12:15:48,495][INFO ][o.e.d.z.ZenDiscovery ] [9hSWSDU] failed to send join request to master [{network-1}{M3r-tbp1QuqOQSj5UM6Ehw}{MdvaRbsfRty_dAHM8jKmkQ}{172.16.250.29}{172.16.250.29:9300}], reason [RemoteTransportException[[network-1][172.16.250.29:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[9hSWSDU][172.16.250.30:9300] connect_exception]; nested: IOException[No route to host: 172.16.250.30/172.16.250.30:9300]; nested: IOException[No route to host]; ]
[2018-06-04T12:16:06,535][INFO ][o.e.d.z.ZenDiscovery ] [9hSWSDU] failed to send join request to master [{network-1}{M3r-tbp1QuqOQSj5UM6Ehw}{MdvaRbsfRty_dAHM8jKmkQ}{172.16.250.29}{172.16.250.29:9300}], reason [RemoteTransportException[[network-1][172.16.250.29:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[9hSWSDU][172.16.250.30:9300] connect_exception]; nested: IOException[No route to host: 172.16.250.30/172.16.250.30:9300]; nested: IOException[No route to host]; ]
[2018-06-04T12:16:10,549][INFO ][o.e.d.z.ZenDiscovery ] [9hSWSDU] failed to send join request to master [{network-1}{M3r-tbp1QuqOQSj5UM6Ehw}{MdvaRbsfRty_dAHM8jKmkQ}{172.16.250.29}{172.16.250.29:9300}], reason [RemoteTransportException[[network-1][172.16.250.29:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[9hSWSDU][172.16.250.30:9300] connect_exception]; nested: IOException[No route to host: 172.16.250.30/172.16.250.30:9300]; nested: IOException[No route to host]; ]
[2018-06-04T12:16:13,556][INFO ][o.e.d.z.ZenDiscovery ] [9hSWSDU] failed to send join request to master [{network-1}{M3r-tbp1QuqOQSj5UM6Ehw}{MdvaRbsfRty_dAHM8jKmkQ}{172.16.250.29}{172.16.250.29:9300}], reason [RemoteTransportException[[network-1][172.16.250.29:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[9hSWSDU][172.16.250.30:9300] connect_exception]; nested: IOException[No route to host: 172.16.250.30/172.16.250.30:9300]; nested: IOException[No route to host]; ]
[2018-06-04T12:16:17,638][INFO ][o.e.d.z.ZenDiscovery ] [9hSWSDU] failed to send join request to master [{network-1}{M3r-tbp1QuqOQSj5UM6Ehw}{MdvaRbsfRty_dAHM8jKmkQ}{172.16.250.29}{172.16.250.29:9300}], reason [RemoteTransportException[[network-1][172.16.250.29:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[9hSWSDU][172.16.250.30:9300] connect_exception]; nested: IOException[No route to host: 172.16.250.30/172.16.250.30:9300]; nested: IOException[No route to host]; ]
[2018-06-04T12:16:27,675][INFO ][o.e.d.z.ZenDiscovery ] [9hSWSDU] failed to send join request to master [{network-1}{M3r-tbp1QuqOQSj5UM6Ehw}{MdvaRbsfRty_dAHM8jKmkQ}{172.16.250.29}{172.16.250.29:9300}], reason [RemoteTransportException[[network-1][172.16.250.29:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[9hSWSDU][172.16.250.30:9300] connect_exception]; nested: IOException[No route to host: 172.16.250.30/172.16.250.30:9300]; nested: IOException[No route to host]; ]
[2018-06-04T12:16:31,688][INFO ][o.e.d.z.ZenDiscovery ] [9hSWSDU] failed to send join request to master [{network-1}{M3r-tbp1QuqOQSj5UM6Ehw}{MdvaRbsfRty_dAHM8jKmkQ}{172.16.250.29}{172.16.250.29:9300}], reason [RemoteTransportException[[network-1][172.16.250.29:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[9hSWSDU][172.16.250.30:9300] connect_exception]; nested: IOException[No route to host: 172.16.250.30/172.16.250.30:9300]; nested: IOException[No route to host]; ]
[2018-06-04T12:16:49,729][INFO ][o.e.d.z.ZenDiscovery ] [9hSWSDU] failed to send join request to master [{network-1}{M3r-tbp1QuqOQSj5UM6Ehw}{MdvaRbsfRty_dAHM8jKmkQ}{172.16.250.29}{172.16.250.29:9300}], reason [RemoteTransportException[[network-1][172.16.250.29:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[9hSWSDU][172.16.250.30:9300] connect_exception]; nested: IOException[No route to host: 172.16.250.30/172.16.250.30:9300]; nested: IOException[No route to host]; ]
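The important detail in these repeated lines is the innermost nested exception. When the chain of wrapped exceptions gets this long, a quick grep pulls out the root cause. A minimal sketch, run here against a pasted sample of one log line (on a live system you would grep the log file directly):

```shell
# Sample fragment of one of the log lines above (abbreviated).
line='RemoteTransportException[[network-1][172.16.250.29:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[9hSWSDU][172.16.250.30:9300] connect_exception]; nested: IOException[No route to host: 172.16.250.30/172.16.250.30:9300]; nested: IOException[No route to host]'

# Print only the innermost IOException message.
echo "$line" | grep -o 'IOException\[[^]]*\]' | tail -n 1
# prints: IOException[No route to host]
```

"No route to host" on a connect to port 9300 points at networking between the nodes, not at Elasticsearch configuration, which is why the next questions are about the network and firewall.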
Are the two hosts on the same network? Is there a firewall between them?
ifconfig
Are there any iptables rules?
iptables -nvL
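Before digging into firewall rules, it can also be worth confirming whether the transport port is reachable at all. A minimal sketch using bash's built-in /dev/tcp redirection (the `port_open` helper name is made up for this example; the IPs and port 9300 are taken from the log above):

```shell
# Return success if a TCP connection to host:port can be opened within 2s.
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: check the Elasticsearch transport port on the master node.
if port_open 172.16.250.29 9300; then
  echo "9300 reachable"
else
  echo "9300 blocked"
fi
```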
Hi @jamesspi, I am facing a problem with Elasticsearch. Could you help me, please?
They are on the same network:
[root@frghcslnetv12 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.250.30 netmask 255.255.255.0 broadcast 172.16.250.255
inet6 fe80::250:56ff:feb6:71fd prefixlen 64 scopeid 0x20<link>
ether 00:50:56:b6:71:fd txqueuelen 1000 (Ethernet)
RX packets 38865 bytes 33083427 (31.5 MiB)
RX errors 0 dropped 761 overruns 0 frame 0
TX packets 10069 bytes 1607764 (1.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 5860 bytes 508660 (496.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5860 bytes 508660 (496.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 52:54:00:94:2b:d4 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@frghcslnetv12 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
4262 1153K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
65 4624 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
2449 147K INPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0
2449 147K INPUT_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
2449 147K INPUT_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID
2449 147K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
0 0 FORWARD_direct all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FORWARD_IN_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FORWARD_IN_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FORWARD_OUT_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FORWARD_OUT_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID
0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT 5084 packets, 693K bytes)
pkts bytes target prot opt in out source destination
5084 693K OUTPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD_IN_ZONES (1 references)
pkts bytes target prot opt in out source destination
0 0 FWDI_public all -- eth0 * 0.0.0.0/0 0.0.0.0/0 [goto]
0 0 FWDI_public all -- + * 0.0.0.0/0 0.0.0.0/0 [goto]
Chain FORWARD_IN_ZONES_SOURCE (1 references)
pkts bytes target prot opt in out source destination
Chain FORWARD_OUT_ZONES (1 references)
pkts bytes target prot opt in out source destination
0 0 FWDO_public all -- * eth0 0.0.0.0/0 0.0.0.0/0 [goto]
0 0 FWDO_public all -- * + 0.0.0.0/0 0.0.0.0/0 [goto]
Chain FORWARD_OUT_ZONES_SOURCE (1 references)
pkts bytes target prot opt in out source destination
Chain FORWARD_direct (1 references)
pkts bytes target prot opt in out source destination
Chain FWDI_public (2 references)
pkts bytes target prot opt in out source destination
0 0 FWDI_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FWDI_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FWDI_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
Chain FWDI_public_allow (1 references)
pkts bytes target prot opt in out source destination
Chain FWDI_public_deny (1 references)
pkts bytes target prot opt in out source destination
Chain FWDI_public_log (1 references)
pkts bytes target prot opt in out source destination
Chain FWDO_public (2 references)
pkts bytes target prot opt in out source destination
0 0 FWDO_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FWDO_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 FWDO_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FWDO_public_allow (1 references)
pkts bytes target prot opt in out source destination
Chain FWDO_public_deny (1 references)
pkts bytes target prot opt in out source destination
Chain FWDO_public_log (1 references)
pkts bytes target prot opt in out source destination
Chain INPUT_ZONES (1 references)
pkts bytes target prot opt in out source destination
2449 147K IN_public all -- eth0 * 0.0.0.0/0 0.0.0.0/0 [goto]
0 0 IN_public all -- + * 0.0.0.0/0 0.0.0.0/0 [goto]
Chain INPUT_ZONES_SOURCE (1 references)
pkts bytes target prot opt in out source destination
Chain INPUT_direct (1 references)
pkts bytes target prot opt in out source destination
Chain IN_public (2 references)
pkts bytes target prot opt in out source destination
2449 147K IN_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
2449 147K IN_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
2449 147K IN_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
... etc
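Note the REJECT rule with `reject-with icmp-host-prohibited` near the top of the INPUT chain: an ICMP host-prohibited rejection surfaces on the client side as exactly the "No route to host" error in the Elasticsearch log. A quick way to spot actively blocking rules, sketched here against a small pasted sample saved to a hypothetical file (on a live host you would pipe `iptables -nvL` straight into the grep):

```shell
# Abbreviated, hypothetical saved copy of the iptables output above.
cat > /tmp/iptables.out <<'EOF'
2449  147K IN_public_allow  all  --  *  *  0.0.0.0/0  0.0.0.0/0
2449  147K REJECT  all  --  *  *  0.0.0.0/0  0.0.0.0/0  reject-with icmp-host-prohibited
EOF

# List only the rules that actively reject or drop traffic.
grep -E 'REJECT|DROP' /tmp/iptables.out
```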
Node 1 is on: 172.16.250.29
Node 2 is on: 172.16.250.30
@asalma, which is the node giving you issues? .29 or .30?
Also, based on your iptables output, I am assuming you are using firewalld? If so, can you run:
firewall-cmd --get-active-zones
@Hamza_Dhahri, just open a new topic with your question, someone will pick it up
The ".30"
[root@frghcslnetv12 ~]# firewall-cmd --get-active-zones
public
interfaces: eth0
OK, on both nodes, please run:
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --zone=public --add-port=9300/tcp --permanent
firewall-cmd --reload
Restart Elasticsearch and try again.
I could do this on ".30", but on ".29" I got this:
[root@frghcslnetv10 ~]# firewall-cmd --get-active-zones
FirewallD is not running
[root@frghcslnetv10 ~]# firewall-cmd --zone=public --add-port=9200/tcp --permanent
FirewallD is not running
Interesting! Can you run iptables -nvL on .29?
It works!
[root@frghcslnetv12 ~]# curl http://localhost:9200/_cat/nodes?pretty
172.16.250.30 53 98 4 1.28 0.55 0.24 mdi - network-2
172.16.250.29 73 98 98 7.66 8.35 8.30 mdi * network-1
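As a quick sanity check that both nodes joined, you can count the lines _cat/nodes returns and pick out the elected master (marked with `*`). Sketched here against the captured output above; on the cluster itself you would pipe the curl command into the same `wc`/`awk`:

```shell
# Captured _cat/nodes output from above.
nodes='172.16.250.30 53 98  4 1.28 0.55 0.24 mdi - network-2
172.16.250.29 73 98 98 7.66 8.35 8.30 mdi * network-1'

# One line per node in the cluster.
echo "$nodes" | wc -l
# prints: 2

# Field 9 is the master marker, field 10 the node name.
echo "$nodes" | awk '$9 == "*" {print $10}'
# prints: network-1
```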
[root@frghcslnetv10 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 1 packets, 73 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Great! You had a local firewall on .30, but not .29.
Thank you very much, @jamesspi!
You're welcome! Please close the topic if you're ready
One last question, please:
How can I know that this worked? How can I see this in Kibana?
My Kibana and Logstash are on ".29".
Why can't I see "network-2"?
Hey @asalma,
You can run:
GET _cat/nodes?v
Or, even better, install the free version of X-Pack to get monitoring set up (you can monitor all aspects of the cluster visually; it is very helpful)
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.16.250.29 59 98 93 8.15 7.88 6.64 mdi * network-1
Can't the cluster work without X-Pack?
Yes, of course it can.
You should have both nodes listed there if it is still working from before... is Elasticsearch still running on .30?