Total_shards_per_node leading to unassigned shards

Hello:

I'm running 0.90.0 in a 3-node cluster with number_of_shards = 3 and
number_of_replicas = 1. Recently I set
index.routing.allocation.total_shards_per_node = 2 on the index so that I
could 1) ensure that when I lose a node, ES doesn't try to re-allocate the
lost node's shards onto the remaining 2 nodes (I don't have the disk space
for that, as I found out earlier) and 2) ensure a uniform distribution of
the shards. If there's a better way to meet 1), I'm all for it.
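
For context, this is roughly how I applied the setting; the index name
("myindex") and host below are placeholders for my actual setup, and I'm
just hitting the HTTP API directly:

import json
import requests

ES = "http://localhost:9200"

# total_shards_per_node is a dynamic per-index setting, so it can be
# changed on a live index through the _settings endpoint.
settings = {"index.routing.allocation.total_shards_per_node": 2}
resp = requests.put(
    ES + "/myindex/_settings",
    data=json.dumps(settings),
    headers={"Content-Type": "application/json"},
)
print(resp.status_code, resp.text)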

Unfortunately, what I'm seeing occasionally is "stuck" allocations, where
the replica of shard 2 has nowhere it's allowed to go, so I have to
manually move shard 0 or 1 to make room. Here's a screenshot:
https://lh4.googleusercontent.com/-Fed1RYUfjYE/Up31CYLpf3I/AAAAAAAABBI/bS6L7-0w87Q/s1600/2013-12-03+09_12_36-ElasticSearch+Head.png
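
The manual workaround I've been using is essentially a shard move via the
cluster reroute API, something like the sketch below; the index, shard
number, and node names are placeholders for whatever the cluster state
shows at the time:

import json
import requests

ES = "http://localhost:9200"

# Move one of the shards that is occupying a slot so the unassigned
# replica can finally be allocated somewhere.
body = {
    "commands": [
        {
            "move": {
                "index": "myindex",
                "shard": 0,
                "from_node": "node1",
                "to_node": "node3",
            }
        }
    ]
}
resp = requests.post(
    ES + "/_cluster/reroute",
    data=json.dumps(body),
    headers={"Content-Type": "application/json"},
)
print(resp.status_code, resp.text)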

Is this fixed in a later version? Or a known issue? How can I help?

Anyone?

On Tuesday, December 3, 2013 9:14:27 AM UTC-6, Andrew Ochsner wrote:
