Let's say I have 1 index split over 5 shards, with 1 replica of that index,
all distributed across 2 nodes.
So in this example *node 1* looks like:
index shard 1
index shard 2
index shard 3
replica index shard 4
replica index shard 5
And node 2 looks like this:
index shard 4
index shard 5
replica index shard 1
replica index shard 2
replica index shard 3
As per the "users" data flow, I will be routing all index requests for a
particular user to 1 shard. In this example, let's say that "user 1" is
routed to "index shard 1", so all of user 1's documents are stored on shard 1.
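To make that routing concrete, here's a minimal sketch of how a routing value pins all of a user's documents to one shard. It uses Python's built-in `hash` purely as a stand-in for the hash function ES actually applies to the routing value; the 5-shard count matches the example above.

```python
NUM_PRIMARY_SHARDS = 5

def target_shard(routing_value: str) -> int:
    """Pick a shard by hashing the routing value (stand-in for ES's hash)."""
    return hash(routing_value) % NUM_PRIMARY_SHARDS

# Every document indexed with routing="user1" lands on the same shard,
# no matter how many documents that user adds.
shards_hit = {target_shard("user1") for _ in range(100)}
print(shards_hit)  # exactly one shard number
```

The key property is determinism: the same routing value always hashes to the same shard, which is what makes per-user reads and writes land in one place.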
My two questions are:
1. what happens if node 1 goes down?
I understand that user 1 will still be able to read the contents of his
data store - I'm assuming all read requests for user 1 will be directed to
"replica index shard 1". But what will happen to any new documents that
user 1 wants to add? The replica is read-only (as I understand it), so will
all new index requests fail, or will they be added to another shard? If
they're added to another shard, will the routing for user 1 now point
to 2 shards?
2. what happens if there's not enough space on 1 particular shard for a
new user but there is enough across 2 shards?
This is purely hypothetical; I'm just interested. I understand I'd have
bigger problems if I came across this situation in production, and that I
should be over-allocating shards in this scenario anyway. As I said, just
curious.
If I haven't been clear on anything, let me know and I'll update.
Let's say I have 1 index split over 5 shards with 1 replica of that index
all distributed across 2 nodes.
[...]
1. what happens if node 1 goes down?
I understand that user 1 will still be able to read the contents of
his data store - I'm assuming all read requests for user 1 will be
directed to "replica index shard 1". But what will happen to any
new documents that user 1 wants to add? The replica is read only
(as I understand it) so will all new index requests fail or will
they be added to another shard? If they're added to another shard
then will the routing for user 1 now point to 2 shards?
When a primary goes away, one of its replicas is promoted to
primary. So you're right that a replica is read-only; it just
doesn't have to remain a replica forever.
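A toy sketch of that promotion, assuming the two-node layout from the question (this just illustrates the behavior described above; it is not ES's actual allocation code):

```python
# Shard copies per node, matching the example layout in the question.
cluster = {
    "node1": {"primary": {1, 2, 3}, "replica": {4, 5}},
    "node2": {"primary": {4, 5}, "replica": {1, 2, 3}},
}

def fail_node(cluster, node):
    """Drop a node; promote surviving replicas of its lost primaries."""
    lost_primaries = cluster.pop(node)["primary"]
    for state in cluster.values():
        promoted = state["replica"] & lost_primaries
        state["primary"] |= promoted
        state["replica"] -= promoted
    return cluster

fail_node(cluster, "node1")
# node2 now holds all five primaries, so user 1's writes to shard 1
# keep working -- the old "replica index shard 1" is a primary now.
```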
2. what happens if there's not enough space on 1 particular shard
for a new user but there is enough across 2 shards?
Once ES targets data for a shard, whether by hashing your routing
value or the doc id, that's where it's going. If there isn't disk
space available, the Lucene index (just for that shard) will likely
be corrupted.
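In other words, the target shard is a pure function of the routing value (or doc id); shard fullness never enters the calculation. A sketch of the contrast, again using Python's `hash` as a stand-in for ES's hash function (the capacity-aware variant is hypothetical, something ES does NOT do):

```python
NUM_SHARDS = 5
free_bytes = [10, 0, 10, 10, 10]  # pretend shard 1 is completely full

def es_style_target(routing: str) -> int:
    # ES-style: hash only -- free disk space is never consulted.
    return hash(routing) % NUM_SHARDS

def capacity_aware_target() -> int:
    # Hypothetical alternative (NOT what ES does): pick the emptiest shard.
    return max(range(NUM_SHARDS), key=lambda s: free_bytes[s])

# es_style_target("user1") returns the same shard every time,
# even if that shard has run out of space.
```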
In the relatively near future, we will support disk calculations for
more intelligent shard allocation, but that won't help the indexing
issue. The latter is quite a tricky one to solve, mostly because ES
and Lucene both try to touch the disk as seldom as possible.