A single-value metrics aggregation that calculates an approximate count of distinct values. Values can be extracted either from specific fields in the document or generated by a script.
Thank you for your replies.
But I can't find a solution for this, even when I set precision_threshold to its maximum (40,000).
So is there no way to get the exact value?
Maybe think about an approach where you use the cardinality agg in multiple requests, each counting a set of users that is guaranteed to be disjoint from the others and to total fewer than 40,000, then add those results up.
To do this, use a query on the userID where you take the hashcode and modulo N, where N is a number big enough to break the global population down into <40k groups. Each request would filter the docs where the result of "hash modulo N" is 0, then 1, then 2, up to N - 1. This is how "partitioning" works in the terms aggregation.
@Mark_Harwood, thank you for your reply, it sounds like a good solution.
But I can't find an example of this. What do you mean by taking the hashcode and modulo N? Can you give me a simple example, please?
PS: I know I could solve this with the entity-centric approach and count the users directly, but I want to solve the problem of getting an exact value, because I use the unique count everywhere.
Thank you.
It's a standard trick [1] to break down a large set of values into evenly divided subsets. It's how we choose to route documents to a choice of shard based on the ID. In my proposal we're using the same approach to divide the large set of user IDs evenly into smaller sets totalling less than 40k each.
Maybe the better question is why does absolute accuracy matter to you? While you're running these queries several new users may have turned up and been unaccounted for?
The exact value matters because:
I have built an entity-centric index for users containing a firstTimeConnect field, so the count of those documents in a period gives the number of new users in that period. From that I can calculate the returning users, which depends on the exact count of all users in that period. This will feed a retention matrix later.
It worked well on a normal-sized dataset, but now I'm facing big datasets and I'm getting inconsistent values.
I just want to understand how to apply the partitioned terms aggregation to my case. Even if it takes longer to execute, that's fine as long as it gives an exact value.
To summarize: I have documents representing user requests, each containing the field user_id. I want to know how many unique users I have.
I don't know if I'm choosing the right values for size & num_partitions.
Since I know the approximate number is close to 200k, I have made 20 partitions of 10k each.
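For reference, a partitioned terms aggregation along those lines looks roughly like this (the index name user_requests is just a placeholder, and user_id is assumed to be a keyword field):

```json
GET user_requests/_search
{
  "size": 0,
  "aggs": {
    "user_ids": {
      "terms": {
        "field": "user_id",
        "size": 10000,
        "include": {
          "partition": 0,
          "num_partitions": 20
        }
      }
    }
  }
}
```

Running it once per partition (0 through 19) and adding up the number of buckets returned should give the total number of unique users, since each user_id hashes into exactly one partition.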
Please confirm, @Mark_Harwood, whether this is the correct way to choose size & num_partitions.
Thank you.
Looks about right.
My proposal was to still use the cardinality agg but feed it the results of a query that matches 1/Nth of the user IDs. It's the same principle as your approach, but it would just return the count of unique user IDs in a partition rather than an exhaustive list of all the actual user IDs in that partition.
In either case, remember to check that the number of users returned is at least one less than the partition size you hope for (in your example, 10k); otherwise N is too small and you may have overfilled that partition, e.g. with 11,000 users, which means your overall counts could be wrong.
@Mark_Harwood Thank you again. But I am lost with all these variables.
Can you help me implement this please ?
I have a library which hashes a string to a number between 0 and 4294967295.
The approximate number of users will be 200k.
So my N here will be: 200k / 20k (precision_threshold) = 10?
Can you tell me how to use the cardinality agg in that case?
My first request will be like this:
Lose the terms agg - your client is not interested in seeing individual values so you only need the cardinality agg.
The trick is to use a query that only examines docs for a single partition.
So use a script query, passing the partition number as a param. The script should return true if the user ID's hash modulo N == the passed partition number.
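A minimal sketch of such a request, assuming an index called user_requests, user_id mapped as a keyword field, and N = 10 as computed above (the Painless hashCode() is just one choice of consistent hash; any hash works as long as every request uses the same one):

```json
GET user_requests/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": {
        "script": {
          "script": {
            "lang": "painless",
            "source": "Math.floorMod(doc['user_id'].value.hashCode(), params.N) == params.partition",
            "params": { "N": 10, "partition": 0 }
          }
        }
      }
    }
  },
  "aggs": {
    "unique_users": {
      "cardinality": {
        "field": "user_id",
        "precision_threshold": 40000
      }
    }
  }
}
```

Repeat the request for partition values 0 through N - 1 and sum the unique_users values. Bear in mind that a script query is evaluated per document, so on a large index it will be noticeably slower than filtering on a pre-computed field.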
@Mark_Harwood
Hello, thank you for your reply. But to make this work, my hashing algorithm must have a restricted interval, mustn't it? I mean, for example, if the maximum of the hash is 4294967295, then to guarantee groups of 40k I would need about 107k partitions, and it makes no sense for me to make 107k requests.
Can you just show me an example for my case?
No, by hashing, your user IDs will be spread sparsely and evenly across the 4bn number range (unless you happen to be facebook in which case it will be dense). The equation you need is not:
4bn / numPartitions = 40k
but
maxExpectedNumUsers / numPartitions = 40k
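To make that concrete with the numbers in this thread, with roughly 200k expected users:
200,000 / 40,000 = 5
So 5 partitions is the bare minimum, and choosing 10 or 20 simply gives each partition comfortable headroom to stay below the 40k precision_threshold.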
Also, in your code example you're not taking the hashcode of the user_id value.
You could also hash the user_id before indexing the document and store the calculated partition id in a separate field in the document. You can then simply filter on this in the query instead of using a script.
If you want you can also create a larger number of partitions initially and query for a range (which you can reduce as cardinality increases).
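A sketch of that variant, assuming a numeric field named user_partition that was set to hash(user_id) modulo 20 at index time (the field name and partition count are only examples):

```json
GET user_requests/_search
{
  "size": 0,
  "query": {
    "term": { "user_partition": 3 }
  },
  "aggs": {
    "unique_users": {
      "cardinality": {
        "field": "user_id",
        "precision_threshold": 40000
      }
    }
  }
}
```

If you create more partitions than you currently need, the term query can become a range query over a span of partition ids, and you can narrow that span as the number of users grows.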
As you can see, I'm sure that all the partitions are under 40k, and I have set the precision_threshold to 40k.
Is there something wrong?
PS: I have indexed the hash from the beginning.
Something fishy going on there.
You might have to compare sorted lists of user ids from the various requests to debug what's going on.
If you have docs with an array of user_ids rather than a single value, that might be a source of deviation.