Today, I accidentally started another node on a different machine on the
same LAN, which resulted in my first ES instance adding this new node to the
cluster. I killed this second node with SIGKILL and restarted my original
ES instance. However, on checking the health status of the various
indices, a lot of them show red. The ones that show a red status have the
number of replicas set to 0; the ones that are green have the number of
replicas set to 1. Is there any way I can cleanly get the health back to
green without deleting and re-indexing? I am using the default setting of
5 shards per index. Also, each index whose status is red has exactly one
unassigned shard.
Restart the other node, let it join, and bring the cluster back to green.
Then either 1) add replicas so each node has a copy of the data, then
remove the old node and set replicas back to 0, or 2) disable allocation
and manually move the shards off the other node onto the main one.
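In concrete terms, the two options above can be sketched with the REST API. This is only a sketch: the index name `my_index` and node name `second-node` are placeholders for your own, and it assumes the cluster is reachable on `localhost:9200` (the `Content-Type` header is required on newer ES versions and harmless on older ones):

```shell
# Option 1: add a replica so the surviving node gets a copy of every shard
curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '
{ "index": { "number_of_replicas": 1 } }'

# ...wait for green, then tell ES to move everything off the extra node...
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{ "transient": { "cluster.routing.allocation.exclude._name": "second-node" } }'

# ...and drop the replicas again once the extra node is empty
curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '
{ "index": { "number_of_replicas": 0 } }'

# Option 2: stop automatic allocation before moving shards by hand
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{ "transient": { "cluster.routing.allocation.enable": "none" } }'
```

Remember to set `cluster.routing.allocation.enable` back to `"all"` once you are done, or the cluster will not allocate anything on its own.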
Thank you for the solution! A quick clarification, though: I am assuming
that on restarting the second node, the cluster will automatically come
back to green, right? Also, instead of restarting the second node (which
was accidental and not really needed), can I force allocation of the
unassigned shards to my original node?
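For reference, forced allocation goes through the cluster reroute API. If a copy of the shard data still exists on the original node's disk, something like the following should work; the index name, shard number, and node name are placeholders, and `allocate_stale_primary` is the ES 5.x+ command (older releases used `allocate` with `allow_primary` instead):

```shell
# Try to bring the unassigned primary back from the copy on node-1's disk.
# accept_data_loss acknowledges that any writes the shard missed are gone.
curl -XPOST 'localhost:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '
{
  "commands": [
    {
      "allocate_stale_primary": {
        "index": "my_index",
        "shard": 0,
        "node": "node-1",
        "accept_data_loss": true
      }
    }
  ]
}'
```

If no copy of the shard exists on disk at all, the only remaining option is `allocate_empty_primary`, which allocates a brand-new empty shard and loses whatever was in the old one.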