We have a little cluster of 2 nodes, hosting 4 indexes of about 1.5M
documents each, replicated on both nodes.
Those 2 nodes are VPS instances running on the same physical host. Since that is a single point of failure, we have decided to start a new VPS on a different host.
What is the correct procedure to add the new node to the cluster, get the indexes replicated onto it, and then remove one of the older nodes?
We don't use multicast, so I imagine I can add the new node to the unicast list in the config file, but how can I be sure this won't take down the whole cluster when I restart Elasticsearch?
These nodes are live in production, so it's a bit touchy for us to take any risk with them.
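For what it's worth, this is roughly what I imagine the discovery section of elasticsearch.yml would look like on each node once the new VPS is added (the hostnames below are just placeholders for our real ones):

    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["vps-old-1.example.com", "vps-old-2.example.com", "vps-new-3.example.com"]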
If this is production, you really want an odd number of master-eligible nodes to reduce the risk of split-brain issues.
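If you do end up keeping three master-eligible nodes, you would also want something like the following in elasticsearch.yml on every node (the value 2 assumes exactly three master-eligible nodes):

    discovery.zen.minimum_master_nodes: 2

That way an isolated node can never elect itself master.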
That said, in your case just add the new node to the cluster, let the shards replicate across to it, then shut down the node you no longer want. Any impact should be minimal.
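To sketch the sequence, assuming the node you want to retire has IP 10.0.0.1 (a placeholder, substitute your own address), it would go roughly like this:

    # On the new VPS, point discovery.zen.ping.unicast.hosts at the existing
    # nodes, start Elasticsearch, then watch the cluster return to green as
    # replicas are allocated to the new node:
    curl -XGET 'http://localhost:9200/_cluster/health?pretty'

    # Drain the node you want to retire by excluding its IP from shard
    # allocation; its shards will be moved to the remaining nodes:
    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "transient": { "cluster.routing.allocation.exclude._ip": "10.0.0.1" }
    }'

    # Once no shards are listed against that node, it is safe to shut it down:
    curl -XGET 'http://localhost:9200/_cat/shards?v'

If I remember correctly, the new node only needs at least one of the existing nodes in its own unicast list in order to join, so you don't have to restart the old nodes straight away; just add it to their config so it is picked up on their next restart.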