I have set up two nodes, both on the same machine (a VM). The first node runs on my host OS (Windows Server 2016) and the second runs in a Docker container. Each node works fine on its own, but they fail to form a single cluster.
I started the first node on my host OS before starting the node in the Docker container.
Can a Docker container connect to a cluster running on the host OS?
Is my config correct?
Here is my config for the first node, the one on the host OS (Windows Server 2016):
This is how I run my Docker image: `docker run -d -p 9201:9200 -p 9301:9300 --name elasticsearch <imageid>`
Really appreciate your help!
EDIT
I ran this command instead: `docker run -d -p 9200:9201 -p 9300:9301 --name elasticsearch <imageid>`
And received this error: `Error response from daemon: failed to create endpoint elasticsearch on network nat: HNS failed with error : Unspecified error.`
Besides Docker network diagnostics on Windows (which I'm probably not going to be the best help with!), can I ask why you're trying to run it this way? Usually people put all the nodes in Docker; docker-compose is pretty good for a small local setup. See our own example docker-compose.yml for a multi-node setup. It'd simplify your setup and save you headaches.
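To illustrate, a trimmed two-node Compose sketch could look like the one below. This is not your exact setup: the image tag, node names and memory settings are assumptions, and the official example linked above is more complete (ulimits, volumes, etc.).

```yaml
# Minimal two-node sketch, loosely modelled on the official
# multi-node example. Image tag and names are placeholders.
version: "2.2"
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
    environment:
      - node.name=es02
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      # second node published on host port 9201, as in your setup
      - "9201:9200"
```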
Other than that, I know that with default Docker settings, 127.0.0.1 from within a container literally just means 'this container', at least on Linux. So I'm not sure your Docker container is going to see the host ES.
Additionally, if your host ES is configured to use http.port: 9200, running Docker with -p 9201:9200 doesn't seem like it's going to work. You can only give a port to one program; they can't share. If the host-OS ES has 9200, how is the Docker ES also going to take 9200? (and 9300 in the same way). I'd try running Docker with -p 9201:9201 -p 9301:9301, along the lines of the sketch below.
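Something along these lines might work, assuming the official image accepts Elasticsearch settings passed as environment variables (it does for dotted setting names; `transport.port` is the 7.x name, older versions use `transport.tcp.port`):

```sh
# Hedged sketch: bind 9201/9301 on both sides and make the
# container's Elasticsearch actually listen on those ports,
# so there's no clash with the host node's 9200/9300.
docker run -d \
  -p 9201:9201 -p 9301:9301 \
  -e "http.port=9201" \
  -e "transport.port=9301" \
  --name elasticsearch <imageid>
```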
Then I'd check manually whether both environments can see each other (example commands after the list):
- Can you manually connect to localhost:9201 from the host, and is it the right Elasticsearch?
- Can you manually connect to localhost:9200 from Docker, and is it the right Elasticsearch?
(Also not directly related to operating the cluster, but useful for diagnostics: at this point you should be able to connect to both 9200 and 9201 from the host, given the port forwards.)
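Concretely, the checks could look like this. `<host-ip>` is a placeholder for an address of the host that is reachable from inside the container; as noted above, 127.0.0.1 inside the container won't do on default networking.

```sh
# From the host: each request should return a different node.name
# but (if clustering is ever to work) the same cluster_name.
curl http://localhost:9200
curl http://localhost:9201

# From inside the container, try to reach the host node.
# <host-ip> is a placeholder, not a real flag or value.
docker exec elasticsearch curl -s http://<host-ip>:9200
```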
... you can probably see why I suggest running all of it inside Docker if you need Docker :).
We are still exploring. Currently we only have one server, and we do not have the funds to buy another. So we are trying to use a Docker container as our extra "server" to host an extra master node. May I know whether the setup we are doing is possible/realistic, though?
Yes, this can be done once I run the container in Docker, but only after I edited the elasticsearch.yml file from `cluster.initial_master_nodes: ["node-1", "node-2"]` to `cluster.initial_master_nodes: ["node-2"]`.
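For context, that change in the container's elasticsearch.yml was:

```yaml
# before
cluster.initial_master_nodes: ["node-1", "node-2"]
# after
cluster.initial_master_nodes: ["node-2"]
```

Note that this setting only matters during the very first cluster bootstrap; with only "node-2" listed, the container node can bootstrap itself as its own one-node cluster rather than waiting for the host node.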
Not possible, as ES is already running as a service on the host OS.
May I know if this setup is realistic? Thank you!
> Currently, we only have one server and we do not have funds to buy another server. So we are trying to use the docker container to be our extra server to host extra master node.
Ah, I see. Very generally speaking, adding more nodes gets you:
- higher availability: if a server falls down, the others take over while it recovers
- better resource utilisation: you can now use more vCPUs, RAM and disk for your data operations, spread across multiple machines
Splitting the resources of one VM won't help with either goal. If you just kept it as a single-node Elasticsearch cluster, it'd work just as well (if not better) and have a lot less complexity. Just accept that you have fewer resources for now and plan around that: make regular snapshots, have quick and tested disaster recovery procedures, etc.
> May I know if this set up is realistic? Thank you!
I would not do what you're doing. If you really, really want to set up a cluster, use 3 nodes (not 2; always use an odd number), and put them all in Docker or all outside Docker, not mixed.
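Extending the earlier two-node Compose sketch to three nodes would just mean one more service block along these lines (names are still placeholders), plus updating `discovery.seed_hosts` and `cluster.initial_master_nodes` in es01 and es02 to include es03:

```yaml
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
    environment:
      - node.name=es03
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es01,es02
      # all three nodes must be listed here, in every service
      - cluster.initial_master_nodes=es01,es02,es03
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
```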