Should I publish multiple node HTTP ports when running all Elasticsearch nodes inside one Docker container?

Hi everyone,

I'm building a multi-node Elasticsearch cluster inside a single Docker container for learning purposes. Instead of using the official Docker image, I'm going old-school: unpacking the .tar.gz archive from elastic.co by hand and launching each node from the CLI inside the container.

Each node (node1, node2, node3) has its own directory and config and listens on its own HTTP port (9200, 9201, 9202), and the cluster is working great internally. I only published node1's port with -p 9200:9200; the other nodes are running internally and their ports are not published.
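For context, the container launch looks roughly like this (image and container names are just placeholders):

```
# One container, with only node1's HTTP port published to the host
docker run -d --name es-sandbox -p 9200:9200 my-es-sandbox-image
```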

Everything seems to function fine:

  • Nodes form the cluster
  • /_cat/nodes returns all 3 nodes (see the check below)
  • Kibana connects through localhost:9200
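For example, all three nodes show up from the host through the single published port:

```
# node1 coordinates the request, so this lists node1, node2 and node3
curl -s "localhost:9200/_cat/nodes?v"
```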

But here’s the core of my question:

Do I need to publish (Docker -p) the HTTP ports of the other nodes (9201, 9202), or is exposing only node1 enough?

I understand that Elasticsearch routes queries internally and node1 can act as the coordinating node. But are there real-world cases (monitoring, direct REST calls, debugging, etc.) that would require those other node HTTP ports to be reachable from outside the container?
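As far as I can tell, even node-level monitoring data for every node is already reachable through node1 alone, for example:

```
# Per-node JVM stats for the whole cluster, served via the coordinating node
curl -s "localhost:9200/_nodes/stats/jvm?pretty"
```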

I'm trying to keep things simple and avoid publishing more ports if it's unnecessary.

Also... side note: I tried asking this on StackOverflow and got steamrolled by folks who mistook this for a "what's the difference between EXPOSE and -p" question — even though I very clearly explained it’s about application-level access, not Dockerfile metadata.

If you want a good laugh (or a facepalm), here’s the closed question:
:link: Do I need to publish (-p) the HTTP ports of multiple Elasticsearch nodes running inside the same Docker container? - Stack Overflow

Thanks in advance for any guidance from fellow Elastic pros! :raising_hands:

If only the HTTP endpoint of one node is published, your entire cluster becomes unavailable to external clients, including Kibana, whenever that node has an issue.

You can do a quick test: start your cluster and stop the node1 process (the only one whose port is published), and you will see that you cannot access Kibana anymore.

The two remaining nodes will still work, as they communicate internally over port 9300, but any external clients will be unable to reach the cluster.
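A rough sketch of that test, assuming the container is named es-sandbox and node1's process can be matched by its directory name:

```
# Kill node1's Elasticsearch process inside the container
docker exec es-sandbox pkill -f node1

# The only published port now refuses connections from the host
curl -s "localhost:9200/_cat/health"

# ...but the surviving nodes still answer from inside the container
docker exec es-sandbox curl -s "localhost:9201/_cat/health"
```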

1 Like

"For learning purposes" hides some details. What are you trying to learn about? Docker? Linux? Containers? Scripting? Elasticsearch?

You don't actually need to publish any ports. You can open a shell inside the container and learn away with curl or other shell/CLI tools.
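For example (container name is a placeholder):

```
docker exec -it es-sandbox /bin/bash
# then, from inside the container, any node's HTTP port is reachable:
curl -s "localhost:9202/_cat/nodes?v"
```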

If your intention is to learn more about Elasticsearch, then this slightly unusual way of setting it up isn't (in my view) adding much value to that specific learning exercise, save for the intellectual challenge. E.g., what are you really learning about cluster resilience or high availability?

Certainly, should you ever wish to run a mission-critical production cluster, you probably wouldn't want to do it this way. But people (successfully) build really cool stuff in weird ways, so I'm not knocking your approach.

1 Like

++ what @leandrojmp said.

There are no real-world cases where it makes sense to run multiple nodes in a single container in the first place. But it is a great learning exercise. You can learn about handling node failures by publishing multiple ports and having clients fail over to a different endpoint if the first one becomes unreachable.
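A minimal sketch of that client-side failover, assuming all three ports are published (e.g. -p 9200-9202:9200-9202):

```
# Try each published endpoint in turn and use the first one that responds
for port in 9200 9201 9202; do
  if curl -fs "localhost:${port}/_cluster/health" >/dev/null; then
    echo "using node on port ${port}"
    break
  fi
done
```

The official Elasticsearch clients do essentially this for you when configured with a list of hosts.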

1 Like

I see, so exposing multiple nodes is required for HA environments...
Thank you, that cleared things up.

So in scenarios where there are multiple nodes, do I need to expose those other nodes or not? That is what I'm trying to understand. What is the practical benefit, and WHY would I want to expose other nodes? Look at @leandrojmp's comment; do you have anything to add to his answer?

Nice. Yes, thank you. You and he got the point; thanks for adding to his answer.

I think @DavidTurner takes a slightly different view to me, to which he is entitled, and it's only a subtle difference anyway, not worthy of much emphasis.

I agreed, and always agreed, with what @leandrojmp wrote on the exposed ports point. Nothing to add.

I simply didn't know where you were starting from and what you hoped to achieve, which is why I asked. Plenty of people (me included) just use a single instance, a cluster of one, for learning certain Elastic-related things.

Good luck with your learnings.

1 Like

There is no requirement to expose the HTTP endpoint of all nodes; if you want, you can expose just one of them, but if that node goes offline your clients will lose access to the cluster.

It is pretty common to have a cluster accessible through only a couple of nodes, such as the hot/ingest nodes.

For example, I've managed a 25-node cluster where all clients communicated only with the 4 hot nodes, and only those 4 nodes appeared in the Kibana and Logstash configurations.
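In Kibana that just means listing only those client-facing nodes in kibana.yml (host names hypothetical):

```
elasticsearch.hosts:
  - "http://hot-node-01:9200"
  - "http://hot-node-02:9200"
  - "http://hot-node-03:9200"
  - "http://hot-node-04:9200"
```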

3 Likes