Since Elasticsearch can use precomputed hash values for its HyperLogLog-based cardinality aggregation, it seems natural to use this for cardinality calculation over records that are pre-aggregated at import time. That is: someone computes a murmur3 hash over a high-cardinality field for a set of records sharing common dimensions, puts the result into a single ES doc with the computed hash stored in the `hash` subfield of that field (and the plain value in the field itself), and then runs a cardinality aggregation on that field.
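To make the setup concrete, here is a sketch of the kind of mapping and query I have in mind. It is based on the `mapper-murmur3` plugin's `murmur3` field type (note that, as far as I know, that field type computes the hash itself at index time rather than accepting a precomputed value; the index and field names are just placeholders):

```json
PUT pre-aggregated-index
{
  "mappings": {
    "properties": {
      "user_id": {
        "type": "keyword",
        "fields": {
          "hash": { "type": "murmur3" }
        }
      }
    }
  }
}

GET pre-aggregated-index/_search
{
  "size": 0,
  "aggs": {
    "unique_users": {
      "cardinality": { "field": "user_id.hash" }
    }
  }
}
```

The question is whether the `user_id.hash` subfield could instead be populated with hash values computed outside ES during pre-aggregation.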
Can it already be used this way, or is there something that prevents it? Am I missing something?
Another interesting thing to discuss would be ingest pipeline functionality for such pre-aggregation.