In my minikube cluster, my Elasticsearch pods have the following elasticsearch.yml config:
cluster.name: elasticsearch-logs
node.name: ${HOSTNAME}
node.master: true
node.data: true
network.host: _site_
transport.tcp.port: 9300
http.port: 9200
http.enabled: true
http.cors.enabled: true
bootstrap.memory_lock: false
xpack.security.enabled: false
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping_timeout: 10s
discovery.zen.ping.unicast.hosts: ["es-0:9300", "es-1:9300", "es-2:9300"]
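For context, the pods come from a StatefulSet roughly like the sketch below (trimmed down; the names, labels, and image tag are illustrative, not my exact manifest):

# rough sketch only; the real manifest also has env vars, volumes, resources, etc.
apiVersion: apps/v1beta1        # apps/v1beta2 / apps/v1 on newer clusters
kind: StatefulSet
metadata:
  name: es                      # pods come up as es-0, es-1, es-2
spec:
  serviceName: elasticsearch    # governing service (manifest omitted here)
  replicas: 3
  template:
    metadata:
      labels:
        app: es
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
        ports:
        - containerPort: 9300   # transport port used for discovery
          name: transport
        - containerPort: 9200   # HTTP API
          name: http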
However, when I use the pod names, I get errors connecting the pods and setting up the 3 master nodes:
[2017-10-17T15:56:35,102][INFO ][o.e.d.DiscoveryModule ] [es-0] using discovery type [zen]
[2017-10-17T15:56:37,953][INFO ][o.e.n.Node ] [es-0] initialized
[2017-10-17T15:56:37,953][INFO ][o.e.n.Node ] [es-0] starting ...
[2017-10-17T15:56:38,899][INFO ][o.e.t.TransportService ] [es-0] publish_address {172.17.0.3:9300}, bound_addresses {172.17.0.3:9300}
[2017-10-17T15:56:38,980][INFO ][o.e.b.BootstrapChecks ] [es-0] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-10-17T15:56:44,112][WARN ][o.e.d.z.UnicastZenPing ] [es-0] timed out after [5s] resolving host [es-1:9300]
[2017-10-17T15:56:44,114][WARN ][o.e.d.z.UnicastZenPing ] [es-0] timed out after [5s] resolving host [es-2:9300]
[2017-10-17T15:56:54,129][WARN ][o.e.d.z.ZenDiscovery ] [es-0] not enough master nodes discovered during pinging (found [[Candidate{node={es-0}{H8EUqY6ARq2tkcGXWXH8sQ}{GTwfoPpCSfyZVdu9MRc_ng}{172.17.0.3}{172.17.0.3:9300}{ml.max_open_jobs=10, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2017-10-17T15:56:59,130][WARN ][o.e.d.z.UnicastZenPing ] [es-0] timed out after [5s] resolving host [es-1:9300]
[2017-10-17T15:56:59,131][WARN ][o.e.d.z.UnicastZenPing ] [es-0] timed out after [5s] resolving host [es-2:9300]
[2017-10-17T15:57:09,090][WARN ][o.e.n.Node ] [es-0] timed out while waiting for initial discovery state - timeout: 30s
[2017-10-17T15:57:09,127][INFO ][o.e.h.n.Netty4HttpServerTransport] [es-0] publish_address {172.17.0.3:9200}, bound_addresses {172.17.0.3:9200}
[2017-10-17T15:57:09,127][INFO ][o.e.n.Node ] [es-0] started
[2017-10-17T15:57:09,132][WARN ][o.e.d.z.ZenDiscovery ] [es-0] not enough master nodes discovered during pinging (found [[Candidate{node={es-0}{H8EUqY6ARq2tkcGXWXH8sQ}{GTwfoPpCSfyZVdu9MRc_ng}{172.17.0.3}{172.17.0.3:9300}{ml.max_open_jobs=10, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2017-10-17T15:57:09,135][WARN ][o.e.d.z.UnicastZenPing ] [es-0] failed to resolve host [es-1:9300]
java.net.UnknownHostException: es-1
at java.net.InetAddress.getAllByName0(InetAddress.java:1280) ~[?:1.8.0_141]
at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[?:1.8.0_141]
at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[?:1.8.0_141]
at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:908) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:863) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:691) ~[elasticsearch-5.6.3.jar:5.6.3]
at org.elasticsearch.discovery.zen.UnicastZenPing.lambda$null$0(UnicastZenPing.java:212) ~[elasticsearch-5.6.3.jar:5.6.3]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_141]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.3.jar:5.6.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
[2017-10-17T15:57:09,142][WARN ][o.e.d.z.UnicastZenPing ] [es-0] failed to resolve host [es-2:9300]
java.net.UnknownHostException: es-2
But when I hardcode the pod IPs (172.17.0.3:9300, 172.17.0.4:9300, ...) it seems to connect fine. Is there anything extra that I have to add when I want to use the pod names? Thanks!
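For reference, this is the IP-based variant of the hosts list that does connect; the .4/.5 addresses are just placeholders following the pattern my pods get, not values I've verified here:

# same setting with pod IPs hardcoded instead of pod names
discovery.zen.ping.unicast.hosts: ["172.17.0.3:9300", "172.17.0.4:9300", "172.17.0.5:9300"]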