Why would I get a different, alternating total hits count for subsequent
match_all search requests (i.e. curl http://localhost:9200/_search) with zero
failed shards? For example, the first time I get 8503251, the next time I get
8479263, the next time 8503251, then 8479263 again, and so on.
It shouldn't happen if you are not performing updates in parallel and don't
have documents with a TTL. Maybe you have a plugin that persists data in an
index that is taken into account when summing all hits across indices? (It
would be helpful to know which index has the changing counts.)
We are definitely not performing any updates in parallel nor do we have any
TTLs. Even if we did though, why would the counts alternate on subsequent
requests?
As far as plugins go, we are using the analysis-phonetic plugin, but since
this is a match_all query, I wouldn't think it would be a factor.
How would I determine which index count is different?
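One way to narrow that down (a sketch, assuming a local cluster on port 9200 and a version that has the _cat API) is to run the same _count request against each index twice in a row; because consecutive requests round-robin between shard copies, the index whose two totals disagree is the one with out-of-sync copies:

```shell
# List index names, then hit each index's _count endpoint twice;
# print any index whose two consecutive totals differ.
for idx in $(curl -s "http://localhost:9200/_cat/indices?h=index"); do
  c1=$(curl -s "http://localhost:9200/$idx/_count" | grep -o '"count":[0-9]*')
  c2=$(curl -s "http://localhost:9200/$idx/_count" | grep -o '"count":[0-9]*')
  [ "$c1" != "$c2" ] && echo "$idx: $c1 vs $c2"
done
```

Because the mismatch only shows up when the request lands on different copies, you may need a few runs before the discrepancy appears.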
On Wednesday, February 5, 2014 11:44:16 AM UTC-5, Adrien Grand wrote:
It shouldn't happen if you are not performing updates in parallel and
don't have documents with a TTL. Maybe you have a plugin that persists data
in an index that is taken into account when summing all hits in all
indices? (would be helpful to know the index which has changing counts).
Sorry, I thought you said that document counts were changing, but it's
actually more than that: the total is going back and forth between 2
different values (I didn't get that at first).
Regarding fixing, the easiest way would be to set the number_of_replicas
setting to 0 and then back to its original value: this will deallocate
replica shards and then allocate them again and they will get copied from
the primaries.
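That suggestion comes down to two settings updates (a sketch, assuming the affected index is named `myindex` and originally had 1 replica; substitute your own index name and replica count):

```shell
# Drop to zero replicas: this deallocates every replica shard of the index.
curl -XPUT "http://localhost:9200/myindex/_settings" \
  -d '{"index": {"number_of_replicas": 0}}'

# Restore the original replica count: fresh replica shards are allocated
# and copied from the primaries, so all copies agree again.
curl -XPUT "http://localhost:9200/myindex/_settings" \
  -d '{"index": {"number_of_replicas": 1}}'
```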
I did try a refresh via elasticsearch-head but it did not fix the problem
with the replicas.
I actually fixed it with the cluster move requests but this is good to know
too.
Take a look at the 'preference' property of the search request body. If you use the user's session id as a custom string, it should prevent the jumping around of values.
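A minimal sketch of that, assuming `abc123` stands in for the user's session id: pinning the preference string routes every request from that session to the same set of shard copies, so the total stops alternating even while the copies are out of sync:

```shell
# Same match_all search as before, but with a custom preference string
# so repeated requests always hit the same shard copies.
curl -s "http://localhost:9200/_search?preference=abc123" \
  -d '{"query": {"match_all": {}}}'
```

Note this only hides the symptom per user; the underlying replica inconsistency still needs fixing.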