I had 4 nodes as part of a cluster. Now I want to move one of the nodes out to use as my staging server, but at the same time I want to keep its data/indices intact.
I checked the data folder and it had all the indices/_state etc.
Then I shut down the node (stopped the ES process) and changed the config to make it a standalone node.
When I restarted Elasticsearch, there were no indices! The data is still there, but Elasticsearch is not able to read it. The count of indices is zero.
How do I recover the data/indices?
Did you reconfigure the node to be part of a different cluster name? If so, you'll need to go to your path.data and rename the directory to match your 'standalone' cluster name. Note, though, that this will only contain the shards this node had at the time you removed it from the main cluster; it won't have the complete index unless you did some work to move all the shards to it before you removed it.
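The rename described above can be sketched like this. In ES 1.x the on-disk layout is roughly `<path.data>/<cluster.name>/nodes/<n>/indices`; the sketch below simulates that layout in a throwaway directory, since the real `path.data` (often `/var/lib/elasticsearch`) depends on your install. All names here are placeholders:

```shell
# Demonstration in a throwaway directory; on a real node DATA_PATH would be
# your path.data setting (e.g. /var/lib/elasticsearch) -- an assumption.
DATA_PATH="$(mktemp -d)"
OLD_CLUSTER="mycluster"        # hypothetical original cluster.name
NEW_CLUSTER="staging-cluster"  # hypothetical standalone cluster.name

# Simulate the ES 1.x on-disk layout: <path.data>/<cluster.name>/nodes/0/indices
mkdir -p "$DATA_PATH/$OLD_CLUSTER/nodes/0/indices"

# With the node stopped, rename the cluster directory to the new cluster name
# so the standalone node can find its shards on startup
mv "$DATA_PATH/$OLD_CLUSTER" "$DATA_PATH/$NEW_CLUSTER"

ls "$DATA_PATH"   # now shows staging-cluster
```

Do this only while the ES process is stopped, or you risk corrupting the shard data.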
Yup. The cluster name is same. Node name is also same.
I only made the new node a master node and disabled the discovery options.
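For reference, a standalone config like the one described above might look roughly like this in elasticsearch.yml on ES 1.x (names below are assumptions, not from this thread):

```yaml
# Hedged sketch of a 1.x standalone-node config (placeholder names)
cluster.name: mycluster                       # unchanged from the original cluster
node.name: node-4                             # unchanged
node.master: true                             # allow this node to elect itself master
discovery.zen.ping.multicast.enabled: false   # stop the node from finding the others
```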
Would this node have the namespace related to indices (if there is any such thing at all)?
It is okay even if the node has got only a subset of data (or a single shard).
This was not a master-eligible node. Would that make any difference?
If it wasn't master-eligible before, it wouldn't have knowledge of the cluster state and therefore of which indices should be present. I suspect this is part of your problem, yes.
Thanks Steve. Could you point me to the proper documentation for making a node master-eligible?
Which configs do I need to take care of for that?
Make it discoverable by the other nodes again so it rejoins the larger cluster while it is master-eligible. It should then get a copy of the cluster state.
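On ES 1.x, rejoining the original cluster as a master-eligible node could look roughly like this in elasticsearch.yml (host names are placeholders, not from this thread):

```yaml
# Hedged sketch: rejoin the existing cluster as a master-eligible node (ES 1.x)
cluster.name: mycluster        # must match the existing cluster's name
node.master: true              # make this node master-eligible
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1:9300", "node2:9300", "node3:9300"]
```

Once it has rejoined and holds a copy of the cluster state, you can take it back out.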
I'm not 100% sure what you're trying to achieve here though. If you want a separate staging cluster, I would have thought bringing up the single node under a new cluster name would achieve just that. If you want it to contain some data, just use the re-index API to read in an index from the remaining 3-node cluster. I think having 2 clusters with the same cluster name is only asking for trouble down the line.
My cluster is on a very old version - 1.3.4 - which does not support the reindex API.
The goal is to setup a staging cluster with some actual data and then upgrade it to the latest version of elasticsearch.
We have some mapping conflicts etc., so I want to try to resolve them before upgrading the production cluster.
Yeah, that would have been easier. But since the node was already part of the cluster, I thought that would not be required.
Every node has the cluster state (master-eligible or not). Removing a node from a cluster and trying to use it in a "new" cluster is not safe and can get you into trouble. Especially if the cluster names are the same and you at some point forget that.
Ignoring all the best practices and just talking about practical stuff: every node is master-eligible by default, so you don't need to set it. Also, you can use clients (e.g. Python) to execute the reindex for you. Assuming you are going to be testing your mappings and will have to reindex many times, that would be the best approach.
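A client-side reindex with the Python client can be sketched as below: stream documents out of the source cluster with a scan/scroll and bulk-index them into the target. Because the copying happens in the client, this works even against old clusters (like 1.3.4) that lack the server-side `_reindex` API. Host and index names are placeholders, and the exact helper behavior may vary by client version:

```python
def to_actions(hits, target_index):
    """Turn scan/scroll hits into bulk index actions aimed at target_index."""
    for hit in hits:
        yield {
            "_op_type": "index",
            "_index": target_index,
            "_type": hit.get("_type", "doc"),  # 1.x documents carry a mapping type
            "_id": hit["_id"],
            "_source": hit["_source"],
        }


def copy_index(source_client, target_client, source_index, target_index):
    """Stream every document from source_index into target_index (sketch)."""
    # Requires the elasticsearch-py client (pip install elasticsearch)
    from elasticsearch import helpers

    hits = helpers.scan(source_client, index=source_index)
    helpers.bulk(target_client, to_actions(hits, target_index))


# Usage (hypothetical hosts, assuming a client version compatible with 1.x):
#   from elasticsearch import Elasticsearch
#   copy_index(Elasticsearch(["http://prod-node:9200"]),
#              Elasticsearch(["http://staging-node:9200"]),
#              "my-index", "my-index")
```

Because the source and target are separate client objects, you can repeat the copy into a freshly created staging index every time you want to test a new mapping.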
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.