total_shards_per_node and node failure

Hi,

Had a question on the setting "total_shards_per_node". If I set an index
up to have:

-- shards = # of nodes in my cluster
-- total_shards_per_node of 2
-- replicas set to 1

Then I would get an even distribution of shards and replicas across my
cluster, which makes sense. The question is: what happens if I have a node
failure? Does the cluster fail to allocate new replicas for the shards
that were on the failed node? And does that mean I risk data loss if any
two nodes fail, no matter the size of the cluster?
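
For reference, here's roughly how I'm creating the index (a sketch; the
host, index name, and node count are made up, and I believe the per-index
setting is index.routing.allocation.total_shards_per_node):

    import json
    import requests

    # Sketch of the index creation I have in mind; assumes 5 data nodes.
    settings = {
        "settings": {
            "index.number_of_shards": 5,     # shards = # of nodes
            "index.number_of_replicas": 1,   # one replica of each shard
            # at most 2 copies (primary or replica) of this index per node:
            "index.routing.allocation.total_shards_per_node": 2,
        }
    }
    resp = requests.put("http://localhost:9200/myindex",
                        data=json.dumps(settings))
    print(resp.json())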

I did some digging in the code and that seems to be the case but wanted to
confirm I haven't missed anything.

Thanks

--

My understanding is that while you wouldn't get data /loss/, the shards on
that node would be unavailable for as long as the node is unavailable. Since
copies of those shards exist on other nodes, they can still be queried and
data can still be inserted into them. When a replacement node is added, those
surviving copies can be used to rebuild the unassigned shards on the new node.
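
You can see this directly from the health API while the node is down (a
quick sketch; the host is assumed):

    import requests

    # With a node down, the cluster typically reports "yellow": every
    # shard still has at least one live copy, but some replicas are
    # unassigned.
    health = requests.get("http://localhost:9200/_cluster/health").json()
    print(health["status"])             # e.g. "yellow"
    print(health["unassigned_shards"])  # replicas waiting for a home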

On Wednesday, December 5, 2012 10:39:50 AM UTC-8, Jeff Rick wrote:

--

So I understand that the data would still be there when the node came back,
but if I lost a node that held a primary shard, the cluster wouldn't be able
to allocate a new replica elsewhere (every surviving node is already at its
total_shards_per_node cap), correct? If that's the case, then losing two
nodes, first one holding the primary and later one holding the replica,
would mean data loss.
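
For what it's worth, this is how I've been checking where each copy lives
(a sketch against the cluster state API, reusing the example index name
from my first message):

    import requests

    # Walk the routing table and print where each copy of "myindex"
    # lives and whether it's the primary or a replica.
    state = requests.get("http://localhost:9200/_cluster/state").json()
    shards = state["routing_table"]["indices"]["myindex"]["shards"]
    for copies in shards.values():
        for copy in copies:
            role = "primary" if copy["primary"] else "replica"
            print(copy["shard"], role, copy["node"], copy["state"])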

Thanks

On Wednesday, December 5, 2012 1:55:54 PM UTC-5, Dan Lecocq wrote:

--

If you lose the node holding a primary shard, its replica can be promoted to
primary. I'm not entirely sure exactly when ES makes that decision, but it
does happen. So you could bring in a completely blank node as a replacement,
and it would rebuild a replica of the lost shard from the newly promoted
primary.
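
If you want to watch that recovery rather than poll by hand, the health API
can block until the cluster is green again (a sketch; the timeout is
arbitrary):

    import requests

    # Block until every shard has a live primary and replica again,
    # or give up after 60 seconds.
    resp = requests.get(
        "http://localhost:9200/_cluster/health",
        params={"wait_for_status": "green", "timeout": "60s"},
    )
    print(resp.json()["status"])  # "green" once the replicas are rebuilt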

Yes, if you lost the primary and then lost the replica before bringing in a
replacement, you'd lose the data. That is, assuming you don't have a gateway
set up to restore it from.

What kind of node loss are you thinking about in particular? That the node
disappears from the network? That it dies completely? Or just that the
elasticsearch process stops on it?

On Wednesday, December 5, 2012 12:07:22 PM UTC-8, Jeff Rick wrote:

--