This is all fine as long as I only use ES via the included Kibana container. However, I need to access this cluster from external hosts. That becomes problematic because the nodes inside the cluster advertise their Docker-internal IP addresses. The application uses the API call below to discover the node addresses, and then of course errors out.
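What my client does is essentially the nodes info API; the exact request it sends may differ, but it boils down to something like this:

```
# Ask the cluster for each node's HTTP address. The publish_address
# values come back as Docker-internal IPs, which external hosts
# cannot reach.
curl http://<docker-host>:9200/_nodes/http?pretty
```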
How can I overcome this? I have tried publishing ports 9200/9300 of all 3 nodes to different ports on the Docker host, and then adding a network.publish_host=172.16.0.146 environment setting to each node, but this results in three 1-node clusters.
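Roughly, my per-node changes looked like this (a sketch rather than my exact compose file; service names and host-side ports are illustrative):

```yaml
services:
  es01:
    environment:
      # Advertise the Docker host's IP instead of the container IP
      - network.publish_host=172.16.0.146
    ports:
      - "9200:9200"   # HTTP
      - "9300:9300"   # transport
  es02:
    environment:
      - network.publish_host=172.16.0.146
    ports:
      - "9201:9200"
      - "9301:9300"
  es03:
    environment:
      - network.publish_host=172.16.0.146
    ports:
      - "9202:9200"
      - "9302:9300"
```

My suspicion is that all three nodes now advertise the same 172.16.0.146:9300 transport address, so discovery breaks and each node ends up forming its own cluster, but I don't know the right way around that.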
Someone must have faced this one in the past... It looks like a very common scenario...
I set that as an environment variable on all 3 instances, and the only thing that changed in the output of the curl was the internal IP addresses that Docker chose to assign to the containers.
My starting point was the vanilla YAML provided in the link I listed above, i.e. without my two additional changes: the network.publish_host setting and the mapping of the 6 internal ports to the Docker host.
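For context, the per-node service definition in that vanilla file looks roughly like this (quoted from memory, so the image version and exact settings may differ from the linked docs):

```yaml
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
    ports:
      - "9200:9200"   # only the first node's HTTP port is published
    networks:
      - elastic
  # es02 and es03 are analogous, with no ports published to the host
```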