How to run multiple client nodes on one host

Hello All,

Could someone explain how to run multiple client nodes on a single host?

Is this documented somewhere? I have been leafing through the docs without luck.

Thanks in advance,

Something like:

bin/elasticsearch --node.client=true &
bin/elasticsearch --node.client=true &
bin/elasticsearch --node.client=true &
bin/elasticsearch --node.client=true &

?

What is your use case?

Hi,

Thanks for the quick response.

So there is a cluster of six nodes, running in six VMware VMs on their own network.

There is also an application being developed that would have two components, each running as a client-only node.
These two run in Docker.

I am trying to figure out how to run both client nodes on a single Docker machine.
Have been playing with all sorts of port mappings without luck.

This is why I wanted to know how to configure multiple nodes on one host, then apply that somehow to Docker.

Right now it only works if the port is mapped directly (-p 9300:9300).

These aren't on the same host from Elasticsearch's perspective. You are responsible for telling the cluster that they are on the same physical hardware, using allocation awareness.
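
For reference, a minimal sketch of what that could look like, assuming 2.x-style command-line settings and made-up rack_id values, one per hardware host:

# on every node running on hardware host A
bin/elasticsearch --node.rack_id=hw_host_a --cluster.routing.allocation.awareness.attributes=rack_id

# on every node running on hardware host B
bin/elasticsearch --node.rack_id=hw_host_b --cluster.routing.allocation.awareness.attributes=rack_id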

I imagine it'd be simpler to have the Docker applications just talk to the cluster directly.

By default Elasticsearch will grab the next available port in the 9300-9400 range. You can lock that down with the transport.tcp.port setting. I imagine the second Elasticsearch instance that you run gets a different port.
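
For example, to pin each node to a known transport port instead of letting it walk up the range, something like this should work (2.x-style command-line flags, two client nodes on the same host):

bin/elasticsearch --node.client=true --transport.tcp.port=9300 &
bin/elasticsearch --node.client=true --transport.tcp.port=9301 &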

Hi,

Thanks a lot for the allocation awareness tip. I will apply that as they are spread between two hardware hosts.

My question is exactly this: I am trying to run two client nodes on a single Docker machine, in two separate containers.
The client nodes would join the cluster.

Right. I'm telling you two things:

  1. You probably don't need client nodes at all.
  2. If you really want them, then one thing that should work is to give each Docker container its own host port and set the Elasticsearch config so each node advertises the forwarded port rather than one from the default range (see the sketch below).

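A rough sketch of option 2, assuming a hypothetical es-client image where bin/elasticsearch can be invoked directly, with the Docker host reachable at 10.0.0.5 (both placeholders). transport.publish_port and network.publish_host make each node advertise the forwarded host port and the Docker host's address to the cluster, instead of its container-internal port and IP:

# first client node: host port 9300 forwarded to container port 9300
docker run -d -p 9300:9300 es-client bin/elasticsearch --node.client=true --network.publish_host=10.0.0.5 --transport.publish_port=9300

# second client node: host port 9301 forwarded to container port 9300
docker run -d -p 9301:9300 es-client bin/elasticsearch --node.client=true --network.publish_host=10.0.0.5 --transport.publish_port=9301
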
I'm not seeing the benefit of running multiple clients in a single container over just having multiple containers.

Hi,

This is exactly what I am trying to achieve: multiple ES client-only nodes, each in its own Docker container, on a single Docker machine.

docker run -p 9300-9400:9300 clientnode1
docker run -p 9300-9400:9300 clientnode2
docker run -p 9300-9400:9300 clientnode3

The example above unfortunately does not work.

It only works when mapping the ports like -p 9300:9300. This, however, limits the number of nodes to one on a single Docker machine.

Even a mapping of -p 9301:9301 does not work.

Why do you need more than one client node on a single host?