Performance - Very large list of buckets in an aggregation field

Hi,

I have to run aggregations on a very large corpus and pull out facets for
~10-12 fields. All fields except one have decent-sized buckets (no more
than ~1K at most); however, one field may have a very large number of
buckets, probably in the millions. Will that turn out to be a performance
issue?

All I am interested in is grouping the records by that field.

Is there a best practice for achieving this, or is this not a typical
scenario?
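For reference, the kind of request I have in mind is a plain terms aggregation, roughly like the sketch below (the index and field names here are placeholders, not my real mapping):

```json
{
  "size": 0,
  "aggs": {
    "group_by_field": {
      "terms": {
        "field": "my_high_cardinality_field",
        "size": 1000000
      }
    }
  }
}
```

The `"size": 1000000` is just to illustrate that I would need far more buckets than the default of 10, which is exactly what I am worried about performance-wise.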

Thanks,
SRK
