Hi everyone,
I'm building a multi-node Elasticsearch cluster inside a single Docker container for learning purposes. Instead of using the official Docker image, I'm going old-school: manually unpacking the `.tar.gz` binary from elastic.co and launching each node manually via CLI inside the container.
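Concretely, the setup inside the container looks something like this (a sketch; the version number and paths are just what I happened to use, adjust as needed):

```bash
# Inside the container: fetch and unpack the tarball from elastic.co
cd /opt
curl -sLO https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.13.4-linux-x86_64.tar.gz
tar -xzf elasticsearch-8.13.4-linux-x86_64.tar.gz

# One copy of the distribution per node, each with its own config and data
for n in node1 node2 node3; do
  cp -r elasticsearch-8.13.4 "$n"
done

# Start each node in the background (as a non-root user; ES refuses to run as root)
# -d daemonizes, -p writes a pid file
node1/bin/elasticsearch -d -p /tmp/node1.pid
node2/bin/elasticsearch -d -p /tmp/node2.pid
node3/bin/elasticsearch -d -p /tmp/node3.pid
```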
Each node (node1, node2, node3) has its own directory and config, each listens on its own HTTP port (9200, 9201, 9202), and the cluster is working great internally. I only published node1's port with `-p 9200:9200`; the other nodes are running inside the container with their ports unpublished.
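For reference, each node's config is written along these lines (a sketch; the cluster name, node names, and exact settings are mine and may differ from what you'd pick):

```bash
# node1's config; node2/node3 are identical except node.name and the
# two port settings bump to 9201/9301 and 9202/9302
cat > node1/config/elasticsearch.yml <<'EOF'
cluster.name: docker-learning
node.name: node1
network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301", "127.0.0.1:9302"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
# security disabled to keep the learning setup simple
xpack.security.enabled: false
EOF
```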
Everything seems to function fine:
- Nodes form the cluster
- `/_cat/nodes` returns all 3 nodes
- Kibana connects through `localhost:9200`
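From the host, everything goes through node1's single published port, e.g.:

```bash
# Both calls hit node1 (the only published port), which answers
# for the whole cluster
curl -s 'http://localhost:9200/_cat/nodes?v'
curl -s 'http://localhost:9200/_cluster/health?pretty'
```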
But here’s the core of my question:
Do I need to publish (Docker `-p`) the HTTP ports of the other nodes (9201, 9202), or is publishing only node1's port enough?
I understand that Elasticsearch routes queries internally and node1 can act as the coordinating node. But are there real-world cases (monitoring, direct REST calls, debugging, etc.) that would require those other node HTTP ports to be reachable from outside the container?
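To make the question concrete, here's the kind of thing I assume can already be done through node1 alone (someone correct me if any of these actually need direct access to the other nodes):

```bash
# Per-node stats for all three nodes, fetched via node1's published port
curl -s 'http://localhost:9200/_nodes/stats?pretty'

# JVM stats for one specific node by name, still routed through node1
curl -s 'http://localhost:9200/_nodes/node2/stats/jvm?pretty'

# Hot threads dump for node3, again without touching 9202 directly
curl -s 'http://localhost:9200/_nodes/node3/hot_threads'
```

The one failure mode I can see is node1 itself going down, which would cut off all HTTP access from the host even though node2 and node3 are still healthy.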
I'm trying to keep things simple and avoid publishing more ports if it's unnecessary.
Also... side note: I tried asking this on StackOverflow and got steamrolled by folks who mistook this for a "what's the difference between `EXPOSE` and `-p`" question, even though I very clearly explained it's about application-level access, not Dockerfile metadata.
If you want a good laugh (or a facepalm), here’s the closed question:
Do I need to publish (-p) the HTTP ports of multiple Elasticsearch nodes running inside the same Docker container? - Stack Overflow
Thanks in advance for any guidance from fellow Elastic pros!