Terms Aggregation not returning keys

Hi everyone,

Recently we ran into a problem using Elasticsearch. We are attempting to detect records that share a duplicate value (a hash) and to patch them with a flag so that we can work through all records as we go (27 million records in total, of which 6 million have the hash populated).
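
A simplified sketch of the kind of flag-patching step involved, assuming an update-by-query; the index name, hash values, and flag field below are illustrative placeholders, not our real mapping:

POST records/_update_by_query
{
  "query": {
    "terms": {
      "kmeta:fileHash": ["<duplicate-hash-1>", "<duplicate-hash-2>"]
    }
  },
  "script": {
    "lang": "painless",
    "source": "ctx._source['kmeta:duplicateFlag'] = true"
  }
}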

This workflow worked fine while the index sat on a single node. Eventually the index grew and we moved it to multiple nodes to retain performance.

After the move the aggregation results were no longer accurate: some keys that we know for sure have a doc count greater than 1 are not returned. I tried running the query with a min_doc_count of 1 and again the aggregation does not return some of the values (hashes) that should be there.

In the query below, removing the must_not and adding a different condition that should return the missing hashes does not work either. If we add a condition with an explicit equality, key = value, on a hash which we know is missing, then the aggregation returns the correct result and count; otherwise the key is missing altogether.

I am hoping a few of you can explain what is going on, whether there is something we might be doing wrong, and whether there is a way to rectify the situation.

 {
  "query": {
    "bool": {
      "must_not": [
        {
          "bool": {
            "filter": [
              {
                "exists": {
                  "field": "$type",
                  "boost": 1
                }
              },
              {
                "term": {
                  "kmeta:Misc": {
                    "value": "KBXD-R-1611392400028",
                    "boost": 1
                  }
                }
              }
            ],
            "adjust_pure_negative": true,
            "boost": 1
          }
        }
      ],
      "adjust_pure_negative": true,
      "boost": 1
    }
  },
  "aggregations": {
    "kmeta:fileHash": {
      "terms": {
        "field": "kmeta:fileHash",
        "size": 10000,
        "shard_size": 10000,
        "min_doc_count": 2,
        "shard_min_doc_count": 0
      }
    }
  }
}

The problem is that randomly sharded data is suited to finding the most popular things only when the frequency of those top things is greater than the number of shards.

Assuming there are millions of hashes that match your query and they are spread somewhat randomly across shards, how would multiple remote shards independently decide on the same subset of 10k (shard_size) terms that would guarantee finding the duplicates? Each of the millions of hashes on each shard occurs only once, so they are all equally promising candidates in this isolated view.
When the required global frequency is low (2 in your example) and there are millions of values to choose from, you have to get each shard to focus its analysis on the same subset of all the matching terms in a request, e.g. just the hashes beginning with "a".
A more effective way to do this is the term partitioning feature in the terms agg. It means you have to make multiple requests, one for each partition, but the results can be made accurate.
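
Roughly, a partitioned version of your terms agg could look something like this (keep your existing query; the num_partitions value is just a placeholder, and you run one request per partition number from 0 to num_partitions - 1, combining the results on your side):

{
  "size": 0,
  "aggregations": {
    "kmeta:fileHash": {
      "terms": {
        "field": "kmeta:fileHash",
        "size": 10000,
        "shard_size": 10000,
        "min_doc_count": 2,
        "include": {
          "partition": 0,
          "num_partitions": 20
        }
      }
    }
  }
}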

Another alternative is to reindex using routing to send docs with the same hash to the same shard.
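
A rough sketch of that, assuming the reindex API with a script that derives the routing from the hash (the index names are placeholders):

POST _reindex
{
  "source": { "index": "old-index" },
  "dest": { "index": "new-index-routed" },
  "script": {
    "lang": "painless",
    "source": "ctx._routing = ctx._source['kmeta:fileHash']"
  }
}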

Hi Mark,

Thank you very much for your answer. The query constraint is what we patch as we iterate in a loop, so that records already returned are excluded from the next page. But as I mentioned, patching all records found in the terms aggregation with no min_doc_count constraint still skips some of the hashes (I do not even care about the accuracy of the count at this time). I also tried what you suggested, splitting the aggregation into partitions, but the missing hashes are still not returned in any set. The only way I was able to get a missing hash into the aggregation was to add an equality constraint on it.

Do you have any other suggestions without reindexing?

I will read up on the reindexing suggestion, but I think that will be my last option.

Thank you yet again for your help in this matter,
Gabe.

Put a cardinality agg on the hash field. If the count it returns is greater than shard_size, then each partition still holds too many terms and you need more partitions (or a bigger size/shard_size per request).
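
Something along these lines would give you that count (the agg name is arbitrary, and the result is approximate by design):

{
  "size": 0,
  "aggregations": {
    "hash_cardinality": {
      "cardinality": {
        "field": "kmeta:fileHash"
      }
    }
  }
}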

Hey Mark,

The last data set on which I am running my tests should return fewer than 10,000 hashes, which is less than the default maximum number of buckets allowed for an aggregation. A single aggregation returns the same result as the partitioned requests.

I have tried to iterate through them using 20 x 500 partitions and another 10 x 1000 partitions.

None of the sets include the missing hashes :thinking:. I am wondering if there is some hidden mechanic which excludes these missing hashes. If it were a matter of accuracy I should have been able to get a match on them even if their doc count was 1, but they are missing altogether unless I add a strict equality query to match a specific hash.

Try partitioning but with shard_min_doc_count = 1
I think 0 may try to return terms that don't match the query too.

Hi Mark,

I believe I had tried it before without success, but I tried it again just to make sure and the results still do not include the missing hashes. Both shard_min_doc_count 0 and 1 return the same result, without the missing hashes.

This should be working (otherwise there's a bug).
I think we'll need to see some JSON to see what's up.

Can you share:

  1. The partitioned agg request which you tried and doesn't work
  2. The "hash == x" request you did to prove the duplicate hashes are there
  3. An example of the docs (relevant fields only) which are duplicates.

Thanks

Hi Mark,

I will try to prepare this for you, a few clarifications:

  1. Do you wish shard_min_doc_count to be 0 or 1?
  2. Do you wish to have the results of a query with min_doc_count 1 or 2? (With 1 there will be more results.)

I will start preparing the data for you and thank you very much for helping out with this one!

Kind regards,
Gabriel K.

final min_doc_count = 2 (to only find the duplicates)
shard_min_doc_count = 1 (to only find docs that match your query at least once)

Is the query important here? We'll only find duplicate hashes if they exist in docs that match the query.

I am using the query to restrict the number of records returned so that it is easier to focus on one hash instance which is missing from the result set.

I am collecting and cleaning the data so it should be here soon :slight_smile:

Relevant metadata stored for the given hash
&
Query 1 and results returning one of the missing hashes:

Query 2 and results returning the set not containing the result of Query 1:
Part 1: Query 2 returning 10 pages not containing the result of Query 1: [PART 1/2]G - Pastebin.com
Part 2: Query 2 Part 2 - Pastebin.com

Let me know if there's anything more I can do to help :slight_smile:

Thanks for that. I can see from the results that doc_count_error_upper_bound > 0, which means that not all of the relevant data is being returned for consideration to the coordinating node.

This means that the number of partitions is too low for the size of results being considered. By increasing the number of partitions you will be reducing the number of terms being considered in any one request to a manageable subset and you should see the doc_count_error_upper_bound value become zero (meaning nothing was left behind on shards).
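
For reference, the value appears at the top of the terms agg in the response; the numbers below are illustrative only:

{
  "aggregations": {
    "kmeta:fileHash": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        { "key": "<some hash>", "doc_count": 2 }
      ]
    }
  }
}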


I understand. I will tweak my test so that the error upper bound becomes 0 and get back to you.


Hi Mark,

Good news: I was finally able to retrieve the previously missing hash as part of the partitioned result set. Thank you very much for the breakthrough.

I do have one more question: how would I go about finding the correct number of partitions for a certain page size so that the error will be 0? Is there some sort of formula or algorithm I could use, so that I do not have to guess to get the optimal values?

Gabe.

Great stuff!

The docs include some guidance. I notice they say to tweak the settings until the partitions' sorted results start to include things you don't want (e.g. terms that only occur once), which shows that nothing relevant was cut off.
They should also say to pay attention to doc_count_error_upper_bound to make sure the calculations are including all counts from all shards.
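
As a rough, illustrative calculation (not an exact formula): if a cardinality agg on the hash field reported, say, 6,000,000 unique values and each partition request keeps size = shard_size = 10,000, you would need at least 6,000,000 / 10,000 = 600 partitions, plus some headroom, before each partition can hold all of its terms and doc_count_error_upper_bound can drop to 0.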
