There shouldn't be any errors, just chance of duplicates.
The cluster doesn't really know anything about indexing/delete/update operations on a per-document basis. E.g. the master node coordinating the cluster state has no idea what data is where, or the order of operations.
Instead, each shard manages its own set of documents and maintains consistent ordering locally (and manages its own set of replicas). This is why a normal delete followed by an index is consistent... the same document ID routes to the same shard, and the shard enforces the order of operations.
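The routing logic can be sketched in a couple of lines (simplified: the real implementation hashes the routing value with Murmur3, but the idea is the same):

```python
# Simplified sketch of shard routing. Same routing value -> same shard,
# so a delete followed by an index of the same document ID always lands
# on the same shard, which enforces the order of operations locally.
def shard_for(routing: str, num_primary_shards: int) -> int:
    return hash(routing) % num_primary_shards

# By default the routing value is the document ID itself, so the same
# ID is always routed to the same shard:
assert shard_for("doc-1", 5) == shard_for("doc-1", 5)
```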
It gets tricky when you introduce custom routing (via parent). When you change the parent value, the document can route to a different shard, with its own timeline unrelated to the first shard's. That's why duplicates can happen, at least until a refresh runs (internally or externally) to clear out the deleted docs.
It's also why there won't be an error... shards only have local knowledge; they don't know what's going on in other shards. So if document ID foo exists on shardA and shardB at the same time, no one except the client code will notice. The shards don't communicate about that sort of thing. It's also why you have to manually delete child docs if you change their parent, because the shards don't track that kind of detail.
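The duplicate scenario is easy to simulate with two dicts standing in for shards (a toy model, not the real implementation):

```python
# Toy model: each shard is a local dict with no cross-shard
# communication, mirroring how real shards behave here.
shardA = {}
shardB = {}

def index(shard, doc_id, doc):
    shard[doc_id] = doc  # a shard only knows about its own documents

# Child "foo" is indexed with parent "p1", which routes to shardA.
index(shardA, "foo", {"parent": "p1"})

# Re-indexing "foo" with parent "p2" routes it to shardB instead.
# shardA is never told to delete its copy.
index(shardB, "foo", {"parent": "p2"})

# Now "foo" exists on both shards; only the client can notice.
duplicates = [s for s in (shardA, shardB) if "foo" in s]
assert len(duplicates) == 2
```

This is exactly why the manual delete of the old child is on you: neither shard knows the other has a copy.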
Mostly, sorta, kinda. It depends on which threads are talking to the cluster, and in what order. Consider this scenario:
- Delete old child
- Index new child
- ThreadA executes a search
- ThreadB executes a refresh
- ThreadC executes a search
- Refresh finishes
- ThreadB executes a search
So we have three threads. ThreadA executes a search before any refresh happens. ThreadB executes a refresh, waits for it to complete, then executes a search. ThreadC does a search sometime after the refresh starts but before it finishes.
So what happens? We're going to assume here that Lucene doesn't do an internal refresh on its own (which can happen at any time, btw).
- ThreadA will see only the old child, because the old child's DELETE and the new child's creation are both waiting on the refresh.
- ThreadB will see only the new child. Because it called refresh, waited for completion, then ran a search, it is guaranteed to see only the new child. The DELETE and the index will both have been processed by the refresh.
- ThreadC could see any combination: new + old, just new, just old, or neither. Because its search happened during the refresh, it may hit the shards at any point while they are refreshing, so you can get any combination.
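The three outcomes above can be sketched with a toy model (all names hypothetical): each shard keeps a searchable snapshot plus a set of pending operations, and a refresh publishes pending state one shard at a time.

```python
class Shard:
    """Toy per-shard model: searches see `visible`; refresh publishes `pending`."""
    def __init__(self, docs):
        self.visible = set(docs)   # what a search can see right now
        self.pending = set(docs)   # what the next refresh will publish

    def refresh(self):
        self.visible = set(self.pending)

def move_child():
    # The old child lives on one shard; the new child routes to another.
    old_shard = Shard({"old-child"})
    new_shard = Shard(set())
    # The "move": DELETE on one shard, index on the other (not yet visible).
    old_shard.pending.discard("old-child")
    new_shard.pending.add("new-child")
    return old_shard, new_shard

# ThreadA: searches before any refresh -> only the old child.
old_shard, new_shard = move_child()
assert old_shard.visible | new_shard.visible == {"old-child"}

# ThreadB: refreshes both shards, then searches -> only the new child.
old_shard, new_shard = move_child()
old_shard.refresh(); new_shard.refresh()
assert old_shard.visible | new_shard.visible == {"new-child"}

# ThreadC: searches mid-refresh. Shards refresh independently, so any
# combination of (refreshed / not refreshed) is possible.
seen = set()
for old_done in (False, True):
    for new_done in (False, True):
        old_shard, new_shard = move_child()
        if old_done: old_shard.refresh()
        if new_done: new_shard.refresh()
        seen.add(frozenset(old_shard.visible | new_shard.visible))

assert seen == {frozenset({"old-child"}),               # just old
                frozenset({"old-child", "new-child"}),  # both
                frozenset({"new-child"}),               # just new
                frozenset()}                            # neither
```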
So basically, the takeaway is that there's no "global consistency" here... the cluster doesn't lock itself while doing updates/indexing/deletes. You can get consistent values, but only from the perspective of individual threads (unless you synchronize all your threads/clients/etc). There's nothing wrong with this, it's just the nature of distributed systems.
It's also generally an edge case that most people don't see. The internal refresh cycle is 1 second, and as I mentioned, Lucene often performs refreshes internally as it merges segments due to indexing pressure, so the effective refresh interval may be considerably shorter.
But if you need consistent views of the data after doing something like moving a child, you'll need to either synchronize all your clients (e.g. so they access the data after a refresh has completed) or rely on per-thread consistency.
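One way to do that synchronization on the client side is to gate all reader threads on a "refresh completed" signal. A minimal sketch with a `threading.Event` (the structure is the point; the actual delete/index/refresh calls depend on your client):

```python
import threading

# Gate all reader threads on a "refresh done" event so every search
# happens only after the refresh has completed.
refresh_done = threading.Event()
results = []

def refresher():
    # ... delete old child, index new child, call refresh here ...
    refresh_done.set()  # signal readers that the refresh has finished

def reader():
    refresh_done.wait()  # block until the refresh completes
    # ... run the search here; it will see only the new child ...
    results.append("searched-after-refresh")

readers = [threading.Thread(target=reader) for _ in range(3)]
for t in readers:
    t.start()
threading.Thread(target=refresher).start()
for t in readers:
    t.join()

assert results == ["searched-after-refresh"] * 3
```

This effectively turns every reader into ThreadB from the scenario above: refresh first, search second.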