How to identify unallocated shards


(Rahul Sharma) #1

Hi,

I have a production ES (1.9.4) deployment on AWS,
running with 2 nodes and 0 replicas. It uses the default setting of 5
shards when creating a new index.

For some reason the master node went down and the other node got elected
as master.

Then I stopped both, restarted the old master first, and then the second
one.
It allocated most of the shards except 4, and now the cluster status is RED.

How do I identify the unallocated shards? And what can be done with those?

Would greatly appreciate your inputs.

Thanks
Rahul


(Chuck McKenzie) #2

Unallocated shards are listed at http://node:9200/_cluster/state in the
routing_nodes unassigned section.

What to do to fix it depends a lot on why they're remaining unassigned.
What do the logs say there?
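A minimal sketch of pulling that list out of the cluster state response, assuming the usual `routing_nodes.unassigned` layout (the sample JSON below is hypothetical and heavily trimmed; a real response is much larger):

```python
import json

# Hypothetical, trimmed excerpt of a GET /_cluster/state response.
cluster_state = json.loads("""
{
  "routing_nodes": {
    "unassigned": [
      {"index": "logs-2012.11", "shard": 2, "primary": true},
      {"index": "logs-2012.11", "shard": 4, "primary": true}
    ]
  }
}
""")

# Print each unassigned shard so the affected index can be identified.
for shard in cluster_state["routing_nodes"]["unassigned"]:
    kind = "primary" if shard["primary"] else "replica"
    print("index=%s shard=%d (%s)" % (shard["index"], shard["shard"], kind))
```

If all the unassigned entries share one index name, that is the index to investigate (or rebuild).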



(Rahul Sharma) #3

Thanks a lot, Chuck, for the tip. I was finally able to identify the
unallocated shards.

I stopped a node just after starting it, before the status went green (it
was still trying to allocate shards at that time). I did this because I
realized, after starting it, that I had started the wrong node first and it
got elected as master. Not sure if this could be the reason.
I can't see any useful log entries that point to the cause.

These shards are all from one index, and I can rebuild it.
Is there any easy way to re-assign these shards, or remove them
permanently, so that I get my green status back?

Thanks
Rahul



(Igor Motov) #4

If you are going to rebuild the index, you can just delete the index with
the unallocated shards. All shards, including unallocated ones, are deleted
when the index is deleted.
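Deleting an index is a single DELETE request against its name. A minimal sketch, assuming a hypothetical host `node:9200` and index name `logs-2012.11` (substitute the index that owns the unassigned shards):

```python
import urllib.request

# Hypothetical host and index name; replace with your own values.
host = "http://node:9200"
index = "logs-2012.11"

# Build the DELETE request; sending it removes the index and all of its
# shards, allocated or not.
req = urllib.request.Request("%s/%s" % (host, index), method="DELETE")
print(req.get_method(), req.full_url)
# → DELETE http://node:9200/logs-2012.11

# urllib.request.urlopen(req)  # uncomment to actually delete the index
```

The equivalent from the shell is `curl -XDELETE 'http://node:9200/logs-2012.11'`. Once the index is gone, the cluster no longer has any unassigned shards to account for, and the status should return to green.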


