total_shards_per_node and node failure

Hi,

Had a question on the setting "total_shards_per_node". If I set an index up
to have:

-- shards = # of nodes in my cluster
-- total_shards_per_node of 2
-- replicas set to 1

Then I would get an even distribution of shards and replicas across my
cluster, which makes sense. The question is: what happens if I have a node
failure? Does the cluster fail to allocate new replicas for the shards that
were on the failed node, since every surviving node is already at its limit
of 2 shards? Does that mean I have potential data loss if any two nodes
fail, no matter the size of the cluster?
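
For reference, here is roughly the setup I have in mind -- the index name
and the node count of 4 are made up for illustration, and the full setting
name is index.routing.allocation.total_shards_per_node:

curl -XPUT 'http://localhost:9200/myindex' -d '{
  "settings": {
    "index.number_of_shards": 4,
    "index.number_of_replicas": 1,
    "index.routing.allocation.total_shards_per_node": 2
  }
}'

With 4 nodes that puts exactly 2 shard copies (primary or replica) on every
node, so there is no headroom left to reallocate anything.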

I did some digging in the code and that seems to be the case, but I wanted
to confirm that I haven't missed anything.

Thanks


--

That is the case, afaik. We always try to keep a little spare capacity, e.g.
by setting the shard count to n-1 or n-2 rather than n, so the per-node
limit still leaves room and we can quickly re-establish redundancy when we
lose a node. Rack awareness can also help with this a little; a rough config
sketch is below.
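
For what it's worth, the rack awareness part is just a couple of lines of
node config -- a sketch only, and the "rack_id" attribute name is arbitrary:

# elasticsearch.yml on each node (tag the node with a rack attribute)
node.rack_id: rack_one
# tell the allocator to spread primaries and replicas across that attribute
cluster.routing.allocation.awareness.attributes: rack_id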
