Assume we have an Elasticsearch cluster with three nodes n1, n2, and n3. On this cluster we have created an index with one shard and two replicas, holding a significant amount of data, so one copy of the shard sits on each node. Now assume node n1 stops working. While it is down, new data is added to the index and some old data is deleted. After some time n1 comes back up and the changes to the index are replicated to its copy of the shard.
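For reference, this is roughly how the index in this scenario would be set up (the index name is just a placeholder; `number_of_shards` and `number_of_replicas` are the standard index settings):

```
PUT /my_index
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 2
  }
}
```

With one primary and two replicas there are three shard copies in total, which on a three-node cluster means one copy per node.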
I would be interested in how this replication is implemented. I assume that not all data is transferred, but only the changes that happened while node n1 was down? Is this realized via some kind of oplog? And is there a maximum number of changes that can be replayed after n1's downtime, so that once this number is exceeded the complete shard is copied over instead?
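In case it helps to frame the question: I assume one could observe what kind of recovery actually happened via the cat recovery API, e.g. something like the following (the column selection here is just a guess at the relevant fields):

```
GET _cat/recovery?v&h=index,shard,type,stage,files_percent,bytes_percent,translog_ops_percent
```

I would expect the `type` and the translog-related columns to indicate whether only operations were replayed or whole segment files were copied, but I am not sure how to interpret them in this scenario.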
Thank you very much in advance and best regards,