Hi guys, I would really appreciate some help understanding what's going
on with shard allocation in this case:
Elasticsearch version: 1.4.4
We had 3 nodes with 1 shard and 1 replica per index (so net 2 copies of
everything). 1 node went down and the cluster went red. It started to
reallocate shards as expected and there were originally ~50 unallocated
shards with 15 primary and the rest replicas.
It's been a few hours now and there are still 15 outstanding shards, all
primaries, that don't seem to be getting re-allocated. I thought this
would be a pretty standard scenario, so I was really hoping I wouldn't need
to manually walk through and re-allocate the primary shards, but I'm not
sure what else to try at this point to get back to green. Any pointers
would be really appreciated. Here are the relevant-seeming bits folks
asked about on IRC:
In the ES logs, for the unallocated index names, there are lines along the lines of:
[2015-04-29 22:08:22,803][DEBUG][action.admin.indices.stats] [Agent Axis]
[webaccesslogs-2015.04.24], node[-r2iQnH4R-mcUy4NicCB5g], [P],
s[STARTED]: failed to execute
"Jean-Paul Beaubier" is the node that went down
shards disk.used disk.avail disk.total disk.percent host              ip             node
   420    21.2gb       77gb     98.3gb           21 ip-10-234-164-148 10.234.164.148 Agent Axis
   420      41gb     57.2gb     98.3gb           41 ip-10-218-145-237 10.218.145.237 Ebon Seeker
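In case anyone wants me to pull more detail, I can list the stuck shards with something like the following (just a sketch against our local node; the column list is the standard `_cat/shards` headers):

```shell
# _cat/shards lists every shard; unassigned primaries show up as
# rows ending in "p UNASSIGNED" when filtered like below.
CAT_SHARDS='curl -s http://localhost:9200/_cat/shards?h=index,shard,prirep,state'
echo "Would run: $CAT_SHARDS"
# Against the live cluster, the actual invocation would be:
# $CAT_SHARDS | grep UNASSIGNED
```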
I'm trying to understand why it's stuck in this state, given that, as far
as I can tell, there's no other info in the logs about why the shards
can't be allocated. Shouldn't the replicas just be promoted in place to
new primaries, and new replicas then be created on the other node?
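If manually forcing allocation really is the only way out, my understanding is it would look roughly like this (untested sketch; the index name is one of ours, but the shard number and target node are placeholders, and as I understand it `allow_primary` will create an empty primary if no copy of the shard data exists on the target node, i.e. potential data loss):

```shell
# Sketch of the ES 1.x cluster reroute "allocate" command.
# WARNING: "allow_primary": true brings up a fresh, EMPTY primary if the
# target node has no on-disk copy of the shard -- only a last resort.
REROUTE_BODY='{
  "commands": [
    {
      "allocate": {
        "index": "webaccesslogs-2015.04.24",
        "shard": 0,
        "node": "Agent Axis",
        "allow_primary": true
      }
    }
  ]
}'
echo "$REROUTE_BODY"
# Against the live cluster, the actual call would be:
# curl -XPOST 'http://localhost:9200/_cluster/reroute' -d "$REROUTE_BODY"
```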
Thanks and regards -- Alex
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/9adda07d-88b0-4fa2-805b-37d4739d6f1a%40googlegroups.com.