Elasticsearch automatically condensed my shards?


(x0ne) #1

I am running a single node (15GB RAM, half allocated to the heap) with ES
(1.2.1) and had created two indexes with 16 shards apiece and no
replicas. Health always reported my "cluster" as yellow due to the lack
of replicas, but aside from that, all was well until this morning.

In total, I am hovering around 65M documents. When I checked the status
of ES this morning, it was still yellow, only now I had 5 shards per index.
All the documents appear to be in place (counts are still the same in
Marvel, and queries return the proper totals), but how did my shard
allocation change?

I had chosen 16 shards to allow for simple routing based on document IDs,
balancing out each shard for when I planned to upgrade nodes. Although my
data is intact, the shards now appear completely unbalanced. I took a look
in the logs around the time Marvel showed the change, but nothing appears
to have been reported.
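To make the routing concern concrete: each document ID hashes to a shard modulo the shard count, so the count has to stay fixed. Here is a toy sketch of that idea; the hash below is a simple multiplicative hash standing in for Elasticsearch's real routing hash, and the modulo-by-shard-count step is the part that matters.

```python
# Toy sketch of ID-based shard routing. Not Elasticsearch's actual
# hash function -- just a stand-in to show the modulo behaviour.

def shard_for(doc_id, number_of_shards):
    h = 0
    for ch in doc_id:
        h = (h * 31 + ord(ch)) & 0x7FFFFFFF  # keep the hash positive
    return h % number_of_shards

ids = ["doc-%d" % i for i in range(10)]
with_16 = {i: shard_for(i, 16) for i in ids}
with_5 = {i: shard_for(i, 5) for i in ids}

# If the shard count silently changed from 16 to 5, most IDs would now
# route to a different shard, and lookups by ID would miss the copies
# that were indexed under the old layout.
moved = [i for i in ids if with_16[i] != with_5[i]]
print("%d of %d IDs would route differently" % (len(moved), len(ids)))
```

This is exactly why the shard count is immutable after index creation, and why the docs suggest over-allocating shards up front.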

Did ES decide to just consolidate my allocations? Is there a way to get
this back? I can't re-assign the "old" shards to a node because there is no
indication they ever existed (nothing on disk, and the settings say 5
shards), and ES tells you to over-allocate up front because you can't add
shards later.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/483137e2-b8b9-476a-864f-4ef72473b957%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


(x0ne) #2

I should also mention that shard allocation shows 5 shards per index as
"unassigned". What's strange is that the numbers of these unassigned shards
match those of the currently active, assigned shards, and the totals still
don't add up to 16. When I try to assign a shard back to the proper node, I
get an error saying either that the shard is already allocated to that node
or that the shard I specified does not exist.

On Monday, June 30, 2014 9:46:15 AM UTC-4, x0ne wrote:




(Mark Walkom) #3

There is no way to change the shard count for an existing index; you have
to delete the index and re-index the data with the new settings. Something
else must be happening, or you may be misunderstanding what you are seeing.

A yellow status specifically means you have unassigned replica shards. Are
you 100% sure you don't have replicas set?
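For the record: `GET /_cluster/health` reports an `unassigned_shards` count, and yellow specifically means all primaries are active while some replica copies are not. A small sketch interpreting such a response; the field names are the real ones from the health API, but the sample values here are made up for illustration.

```python
# Interpret a cluster-health response. Field names match what
# GET /_cluster/health returns; the values are illustrative only.
sample_health = {
    "status": "yellow",
    "active_primary_shards": 10,
    "active_shards": 10,
    "unassigned_shards": 10,
}

def explain_status(health):
    if health["status"] == "yellow":
        return ("all %d primaries are active, but %d shard copies "
                "(normally replicas) are unassigned"
                % (health["active_primary_shards"],
                   health["unassigned_shards"]))
    if health["status"] == "red":
        return "at least one primary shard is unassigned"
    return "all primaries and replicas are assigned"

print(explain_status(sample_health))
```

If `unassigned_shards` equals the number of primaries per index, that is a strong hint the index was created with `number_of_replicas: 1` after all.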

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com

On 30 June 2014 23:51, x0ne brandon@9bplus.com wrote:




(system) #4