Thanks a lot, Chuck, for the tip. I was finally able to identify the unallocated shards.
I stopped a node just after starting it (before the status went green; it was still trying to allocate shards at that time). I did this because, after starting that node, I realized I had started the wrong node first, and that node got elected as master. Not sure if this could be the reason.
I can't see any useful logs that point me to the cause.
These shards are from one index and I can re-build it.
Is there any easy way to re-assign these shards or remove them permanently
so that I get my green status back?
On Mon, Nov 5, 2012 at 11:32 PM, Chuck McKenzie email@example.com wrote:
Unallocated shards are listed at http://node:9200/_cluster/state in the
routing_nodes unassigned section.
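To illustrate what that section looks like, here is a small sketch that picks the unassigned shards out of a `_cluster/state` response. The JSON below is made-up sample data (the index name "logs" and shard numbers are hypothetical); in practice you would fetch the real response from `http://node:9200/_cluster/state` and the exact field layout may differ slightly between ES versions.

```python
import json

# Made-up sample imitating the routing_nodes section of a
# /_cluster/state response; index name and shard numbers are hypothetical.
sample_state = json.loads("""
{
  "routing_nodes": {
    "unassigned": [
      {"index": "logs", "shard": 2, "primary": true, "state": "UNASSIGNED"},
      {"index": "logs", "shard": 4, "primary": false, "state": "UNASSIGNED"}
    ]
  }
}
""")

# List each unassigned shard as "index/shard (primary=...)".
unassigned = [
    "%s/%d (primary=%s)" % (s["index"], s["shard"], s["primary"])
    for s in sample_state["routing_nodes"]["unassigned"]
]
print(unassigned)
```

With the real cluster state in hand, the same loop tells you exactly which index and shard numbers are stuck, which is the starting point for deciding whether to rebuild the index or dig further into the logs.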
What to do to fix it depends a lot on why they're remaining unassigned.
What do the logs say there?
On Monday, November 5, 2012 6:23:00 AM UTC-6, Rahul Sharma wrote:
I have a production ES (1.9.4) deployed on AWS.
Running with 2 nodes and 0 replicas. It has the default setting of 5 shards when creating a new index.
For some reason the master node went down and the other node got elected as master.
Then I stopped both and restarted the old master first, followed by the other node.
It allocated all the shards except 4, and now the status is RED.
How do I identify the unallocated shards? And what can be done with those?
Would greatly appreciate your input.