Scroll aggregations


(Deepjot Singh) #1

hello friends,
I have an index with some 10m records.
When I try to find the distinct values in one field (around 2m of them), my Java client runs out of memory.
Can I implement scan and scroll on this aggregation to retrieve the same
data in smaller parts?

Thanks

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/e0c92441-d1e7-44bc-bf47-940279553ebd%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


(ElasticSearch Users mailing list) #2

Unfortunately not at the moment. However, you could look into spreading the
data across more shards/nodes (and thus lower memory requirements per
node), adding more RAM, or possibly using disk-based fielddata.
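As a workaround, you can also skip the aggregation entirely: scan/scroll over the documents and accumulate the distinct values client-side, one page at a time. A minimal Python sketch of that idea follows; `scroll_batches` is a hypothetical stand-in for whatever yields successive scroll pages from Elasticsearch (here it just serves canned hits so the logic is runnable on its own).

```python
# Sketch of computing distinct field values client-side via scan/scroll,
# instead of a single terms aggregation that must hold all buckets on a node.
# `scroll_batches` is an assumption: a stand-in for repeated _search/scroll
# calls; here it just yields canned pages of hits.

def scroll_batches():
    """Stand-in for successive scroll pages; yields lists of hits."""
    pages = [
        [{"_source": {"user": "alice"}}, {"_source": {"user": "bob"}}],
        [{"_source": {"user": "alice"}}, {"_source": {"user": "carol"}}],
        [],  # an empty page signals the end of the scroll
    ]
    for page in pages:
        yield page

def distinct_values(field):
    """Accumulate the distinct values of `field` one scroll page at a time."""
    seen = set()
    for hits in scroll_batches():
        if not hits:
            break
        for hit in hits:
            seen.add(hit["_source"][field])
    return seen

print(sorted(distinct_values("user")))  # ['alice', 'bob', 'carol']
```

Note the trade-off: the client still holds one entry per distinct value (2m strings), but that is usually far cheaper than loading fielddata for all 10m docs on the node, and the set could be spilled to disk or sorted externally if even that is too much.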



(system) #3