SO... there is a clean way to resolve this. Although I must say the
Elasticsearch documentation is very, very confusing (all these buzzwords
like cluster and zen discovery boggle my mind!).
Now, say you have 2 instances, one on port 9200 and the other on 9201, and
you want ALL the shards to end up on 9200.
Run this command to disable shard allocation (the setting is cluster-wide,
so it doesn't matter which node the request goes to). You can change
persistent to transient if you don't want the change to be permanent. I'd
keep it persistent so this doesn't ever happen again.
curl -XPUT 'localhost:9201/_cluster/settings' -d '{
  "persistent" : {
    "cluster.routing.allocation.disable_allocation" : true
  }
}'
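If you want to sanity-check that the setting actually took effect, you can read the cluster settings back (just a generic check, not something from the original post):

curl 'localhost:9201/_cluster/settings?pretty'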
Now, run the command to MOVE all the shards on the 9201 instance to 9200.
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands" : [ {
    "move" : {
      "index" : "<INDEX NAME>", "shard" : <SHARD NUMBER>,
      "from_node" : "<ID OF 9201 node>", "to_node" : "<ID of 9200 node>"
    }
  } ]
}'
You need to run this command for every shard in the 9201 instance (the one
you wanna get rid of).
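If you are not sure what the node IDs or the shard numbers are, you can look them up first with the standard cluster APIs (nothing specific to this thread):

# Node IDs and names:
curl 'localhost:9200/_nodes?pretty'

# Which shards currently sit on which node:
curl 'localhost:9200/_cluster/state?pretty'

On newer versions, curl 'localhost:9200/_cat/shards' gives a more compact per-shard view.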
That's it!
On Wednesday, April 3, 2013 2:36:34 AM UTC-4, Sujoy Sett wrote:
Hi,
Your case sounds similar to one I have faced several times.
I started a new node by mistake, so for a while there was one extra node,
and ES automatically started moving and balancing data between the nodes.
By the time I noticed the extra node, some shards had already moved there,
and just shutting the node down could leave those shards unavailable,
turning the cluster red.
Two solutions to the problem (which I follow):
- Keep the new node up, increase the replica count so there is a copy of
each shard on at least one node other than this extra one, then shut the
node down.
- Bring the node up after assigning a certain tag value to it, issue a
command to exclude shards from this tag, wait a while for the shards to
move off this node, then shut the node down (a sketch of this follows
below).
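For the tag approach, something along these lines should work, assuming the extra node was started with a custom attribute in its config (node.tag: extra is my own example value, not from the original post):

# In the extra node's elasticsearch.yml:
#   node.tag: extra

# Then tell the cluster to move shards off any node carrying that tag:
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude.tag" : "extra"
  }
}'

# Once the node holds no shards (check _cluster/health or the cluster
# state), it is safe to shut it down.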
Not sure whether the same problem has occurred in your case, just thought
of sharing in case it helps.
On Wednesday, April 3, 2013 12:43:43 AM UTC+5:30, utkar...@gmail.com wrote:
I just stumbled on the same issue. I am evaluating and currently using ES
to index logs.
I started a new node by mistake with the same cluster name, so it formed a
cluster with the original one. I killed the new node, but now the original
node which indexes logs has status=red.
Is there a way I can fix this without deleting all that data?
Thanks,
-Utkarsh
On Monday, March 18, 2013 2:20:42 AM UTC-7, Clinton Gormley wrote:
On Sun, 2013-03-17 at 14:39 -0700, inZania wrote:
I know this is an old thread but I have to jump in:
I just had this happen on my production servers, where the status was red
due to a system shutdown on one of my nodes, and I ended up being forced to
delete 6GB of data in order to get the status to turn green again. Very
frustrating.
You shouldn't need to do this, but given that you haven't provided any
details about your cluster, or the problem that you saw, it's impossible
to provide advice.
clint