I am thinking about building a kind of map-reduce project that uses Elasticsearch as a first level of filtering. As most people know, when someone runs a map-reduce job it typically churns through a much larger set of data than it actually needs.

Because I want this to "scale", I don't want the data collected from Elasticsearch to go to one node and then be split up and sent out to the other nodes. Instead, I'm thinking of having each worker use some sort of skip, similar to MySQL's LIMIT 5, 10.

What I am aware of is that when a skip takes place, Elasticsearch first has to pull the first x records before it can return the next set, meaning I'm going to end up pulling a lot of unneeded records. Is there any way around this in Elasticsearch?
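To make the skip I have in mind concrete, here is a sketch of the request body each worker node would send. The index, query, and page size are hypothetical; the point is just the LIMIT-style from/size slicing:

```python
PAGE_SIZE = 10

def slice_query(worker_index, page_size=PAGE_SIZE):
    """Request body for one worker's slice of the result set.

    Like MySQL's LIMIT offset, count: "from" skips hits and "size" caps
    how many come back. The catch described above is that the cluster
    still has to collect and rank the first (from + size) hits before
    discarding the skipped ones, so deep offsets get expensive.
    """
    return {
        "query": {"match_all": {}},          # placeholder for the real filter
        "from": worker_index * page_size,    # e.g. worker 2 skips 20 hits
        "size": page_size,
    }

# Worker 0 would fetch hits 0-9, worker 1 hits 10-19, and so on,
# so no single node has to collect and redistribute everything.
print(slice_query(1))
```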