After Monday's AWS EC2 fail-basket I ended up with a split-brain and a number
of rather strange configurations. I've cleaned those out with judicious use
of kill and restarts of the ES nodes.
My index has 5 shards and one replica on six data nodes. After my kill
fiesta all five shards were yellow or red. Four of them have come back to
green and are accepting writes again. The fifth shard has stubbornly
remained at yellow even after closing and opening the index. It claims to
have one active shard and one initializing shard. It's been initializing for
about 20 hours now, and I don't think it's going to finish. When the other
shards were initializing there was a tremendous amount of disk activity;
now there is nothing spectacular going on.
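For anyone who wants to see what I'm seeing, this is roughly how I'm checking per-shard state. A sketch only: the host and the index name "myindex" are placeholders for my real setup.

```shell
# Sketch: inspect per-shard state to see which shard is stuck INITIALIZING.
# HOST and the index name "myindex" are placeholders, not my real values.
HOST='http://localhost:9200'
HEALTH_URL="$HOST/_cluster/health?level=shards&pretty=true"

# I run this against the cluster with curl, e.g.:
echo "curl -s '$HEALTH_URL'"

# Same request scoped to a single index:
echo "curl -s '$HOST/_cluster/health/myindex?level=shards&pretty=true'"
```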
- How can I kick the last initializing shard into working? Will an increase in
shards cause it to rebalance and fix itself, or will that only cause more
problems?
- Two of the six data nodes have no data on them. I'm not entirely sure what
they're doing, but I'd like to get them involved. Any suggestions on how to
push indices onto them? Possibly at the same time as fixing the busted shard?