Fetch 200M documents with slice and scroll


(Harshil) #1

Hi, we have a requirement to fetch ~200M documents. To parallelize this, I am using the sliced scroll API and fetching 10,000 documents per page. I know the scroll API runs a query, takes a snapshot of the matched documents, and keeps it alive until the TTL expires. I wanted to understand how slicing interacts with scrolling. Say my query matches 25M documents and I pass slice: {"id": 0, "max": 5}, so it will roughly split the 25M into ~5M per slice. Will it keep a snapshot of all 25M documents alive, or only of the documents belonging to that slice id (here slice id 0)?
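To make the question concrete, here is a minimal sketch of the kind of worker I have in mind, using the official Python `elasticsearch` client (the index name, `match_all` query, and helper names are placeholders, not our real setup). Each worker would be given its own slice id and scroll through only its partition:

```python
def sliced_scroll_body(slice_id, max_slices, page_size=10_000):
    """Build the search body for one slice of a sliced scroll.

    Elasticsearch partitions the matched documents across `max_slices`
    slices, and each slice id scrolls independently over its partition.
    """
    return {
        "slice": {"id": slice_id, "max": max_slices},
        "size": page_size,
        "query": {"match_all": {}},  # placeholder: the real query goes here
    }


def fetch_slice(es, index, slice_id, max_slices, ttl="5m"):
    """Scroll through one slice; intended to run in its own worker.

    `es` is an elasticsearch.Elasticsearch client instance.
    """
    resp = es.search(index=index,
                     body=sliced_scroll_body(slice_id, max_slices),
                     scroll=ttl)
    scroll_id = resp["_scroll_id"]
    while True:
        hits = resp["hits"]["hits"]
        if not hits:
            break
        yield from hits
        resp = es.scroll(scroll_id=scroll_id, scroll=ttl)
        scroll_id = resp["_scroll_id"]
    # free the scroll context as soon as the slice is exhausted
    es.clear_scroll(scroll_id=scroll_id)
```

So the question is essentially whether the scroll context held open by one such worker (say slice id 0) pins the full 25M-document snapshot, or only that slice's ~5M share.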


(system) #2

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.