The cardinality aggregation is notorious for causing memory issues when nested under a high-cardinality terms aggregation.
Each unique-count computation uses only a modest amount of memory on its own, but multiplied across a large number of parent buckets it adds up to a lot.
Fortunately you can tune the amount of memory used per count, at the cost of that count's accuracy, via the precision_threshold setting. Below the threshold, the aggregation keeps an exact set of term hashes; above it, it switches to a fuzzier probabilistic way of counting unique values (HyperLogLog++). The default threshold is 3000, but you can lower it to make big memory gains. In Kibana it looks like this:
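(A sketch of the request shape: my_index, category, and user_id are placeholder names, so substitute your own index and fields.)

```json
POST my_index/_search
{
  "size": 0,
  "aggs": {
    "by_category": {
      "terms": { "field": "category", "size": 1000 },
      "aggs": {
        "unique_users": {
          "cardinality": {
            "field": "user_id",
            "precision_threshold": 100
          }
        }
      }
    }
  }
}
```

Here precision_threshold is dropped from the default 3000 to 100, so the counter held for each parent bucket stays small; buckets with more than 100 unique values will report slightly approximate counts.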

(Note: in my test I was using Elastic Stack 7.2, which didn't go into meltdown. Instead, the circuit breaker kicked in with a memory warning and rejected a query similar to yours. Adding precision_threshold avoided the error, but the broader point is that newer versions of the stack handle bad queries more gracefully.)