maxDocs different between primary and replica shards

We're running Elasticsearch (currently 0.90.6) in what I'd call a
"replicated" architecture: our indexes are quite small (tens of thousands
of documents) and fit easily on a single machine, so we allocate a single
shard per index. However, we make sure that they are replicated to each
node of our cluster. This approach ensures that each application server
has its own "local" ES with a full copy of every index and can keep
working autonomously if other nodes fail. This has worked well so far.

Now, we're seeing small but visible score discrepancies between ES nodes,
specifically between the primary shard and the replicas. Using explain, we
found that the difference is in the maxDocs value. As documented, deleted
documents may still contribute to maxDocs (and thus affect TF-IDF scores).
That's not a problem per se.
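
For reference, something along these lines shows the difference per node
(a sketch only, with hypothetical host, index, and field names;
preference=_local makes each node answer from its own shard copy, and the
TF-IDF explanation of that era embeds maxDocs in description strings like
"idf(docFreq=..., maxDocs=...)"):

    import re
    import requests

    # Hypothetical host and index names; adjust to your cluster.
    HOSTS = ["http://node1:9200", "http://node2:9200"]
    INDEX = "myindex"
    QUERY = {"explain": True, "query": {"match": {"title": "foo"}}}

    def max_docs_values(host):
        """Run the query on one node and collect the maxDocs values from its explanations."""
        # preference=_local makes the receiving node answer from its own shard copy.
        resp = requests.post(
            f"{host}/{INDEX}/_search",
            params={"preference": "_local"},
            json=QUERY,
        ).json()
        values = set()
        for hit in resp["hits"]["hits"]:
            # The TF-IDF explanation embeds maxDocs inside description strings,
            # e.g. "idf(docFreq=12, maxDocs=34567)".
            for m in re.finditer(r"maxDocs=(\d+)", str(hit.get("_explanation", ""))):
                values.add(int(m.group(1)))
        return values

    for host in HOSTS:
        print(host, max_docs_values(host))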

The problem is rather that maxDocs is different between the primary and the
replica shards (until we restart ES or force a merge using the optimize
call). Depending on whether the primary or a replica is hit with the exact
same query, we get different scores because the maxDocs value is different
by exactly the number of documents that have been deleted previously.
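
A quick way to confirm this is to compare deleted-document counts per shard
copy via the segments API; a sketch with hypothetical names (the exact
response layout may vary a bit between versions):

    import requests

    # Hypothetical host/index. The segments API reports per-segment doc counts for
    # every shard copy, including documents that are deleted but not yet merged away.
    resp = requests.get("http://node1:9200/myindex/_segments").json()

    for index_name, index_data in resp["indices"].items():
        for shard_id, copies in index_data["shards"].items():
            for copy in copies:
                role = "primary" if copy["routing"]["primary"] else "replica"
                deleted = sum(seg["deleted_docs"] for seg in copy["segments"].values())
                docs = sum(seg["num_docs"] for seg in copy["segments"].values())
                print(f"shard {shard_id} ({role}) on {copy['routing']['node']}: "
                      f"num_docs={docs} deleted_docs={deleted}")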

Is there any way to ensure that maxDocs is the same on primary and replica
shards, short of forcing a costly merge?

(Using DFS queries or not makes no difference, which is what I'd expect
from my understanding of them: the index isn't really distributed, it's
replicated.)
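
For completeness, this is the kind of request I mean; a sketch with
hypothetical names:

    import requests

    # Hypothetical host/index. dfs_query_then_fetch gathers global term statistics
    # from the shards participating in the search before scoring; with a single-shard
    # index only one copy (primary or a replica) participates, so the scores still
    # depend on which copy happens to serve the request.
    resp = requests.post(
        "http://node1:9200/myindex/_search",
        params={"search_type": "dfs_query_then_fetch"},
        json={"query": {"match": {"title": "foo"}}},
    ).json()

    print([hit["_score"] for hit in resp["hits"]["hits"]])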

Thanks

Klaus


I have exactly the same issue!
Does anyone have a solution for this?

Thanks,
Csaba



Same problem...

Bueller? Bueller?



Hi,

Merging of segments, and the resulting removal of deleted documents, is not
coordinated across nodes in Elasticsearch, which means that the number of
deleted documents can differ between primary and replica shards. Optimising
an index down to a single segment does resolve this, but can, as noted, be
quite costly. One way to get around this might be to use a custom
preference parameter [1] to ensure that you always hit the same shard copy
for related queries. This can give each user much more consistent results
even when the shard copies are not synchronised, while still allowing you
to spread the query load across all of them.

[1] Request body search | Elasticsearch Guide [8.11] | Elastic
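
To illustrate, a minimal sketch (hypothetical host, index, and preference
values; only the preference parameter and the optimize/forcemerge endpoint
are standard, everything else is made up for the example):

    import requests

    # Hypothetical host/index/user. Requests that carry the same preference string
    # are routed to the same shard copy, so repeated queries see the same maxDocs
    # value and therefore the same scores.
    resp = requests.post(
        "http://node1:9200/myindex/_search",
        params={"preference": "user-42"},   # e.g. a session or user identifier
        json={"query": {"match": {"title": "foo"}}},
    ).json()

    # Merging down to a single segment drops the deleted documents on every copy
    # and brings maxDocs back in line, but it is expensive on larger indexes.
    # (The endpoint was _optimize in 0.90-era Elasticsearch; newer versions call it _forcemerge.)
    requests.post("http://node1:9200/myindex/_optimize",
                  params={"max_num_segments": 1})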

Best regards,

Christian



Thanks for your reply, Christian. This helped a lot, and I eventually learned
that this is called the "bouncing results"
problem: Search Options | Elasticsearch: The Definitive Guide [2.x] | Elastic

I don't quite understand why it wouldn't be cheaper to make the shard
copies mirror each other exactly (or at least produce the same scores, even
if not immediately). At the moment the search preference is a sufficient
workaround, though in my use case I can't use arbitrary strings, because
all users share the same query and results; I have to use something like
"_primary_first", so I'd guess this reduces cluster scalability to some
degree. Do you think this will be improved in the future?
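
One variation might be to derive the preference string from the query
itself, so identical queries always land on the same shard copy while
different queries still spread across the copies; just a sketch with
hypothetical names:

    import hashlib
    import json
    import requests

    def search_with_stable_preference(host, index, query_body):
        """Route identical queries to the same shard copy by hashing the query."""
        # Hypothetical helper: any stable string works as a preference value.
        preference = hashlib.sha1(
            json.dumps(query_body, sort_keys=True).encode("utf-8")
        ).hexdigest()
        return requests.post(
            f"{host}/{index}/_search",
            params={"preference": preference},
            json=query_body,
        ).json()

    # Example usage with made-up names.
    results = search_with_stable_preference(
        "http://node1:9200", "myindex", {"query": {"match": {"title": "foo"}}}
    )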

But thanks again for your reply!

