but as the indices grow, I would probably run out of heap space.
So my question is: can I do the search in, say, increments of 1000?
Something like this...
fetched = 0
count = 1000
# curl -XGET '10.55.91.111:9200/house/node2/_search?scroll=10m&size=1000' -d '...'
while fetched < 10000:
    process the 1000 hits returned
    fetched += count
    get the next 1000 (scroll again with the returned _scroll_id)
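A minimal Python sketch of that loop, under the assumption that `search_fn` and `scroll_fn` are hypothetical wrappers around the two curl calls (the initial `_search?scroll=10m` request and the follow-up scroll request that posts the `_scroll_id` back):

```python
def fetch_in_batches(search_fn, scroll_fn, batch_size=1000, limit=10000):
    """Collect hits batch_size at a time until `limit` hits are gathered
    or the scroll runs dry. search_fn(size) and scroll_fn(scroll_id) are
    stand-ins for the two curl calls; each returns (scroll_id, hits)."""
    scroll_id, hits = search_fn(batch_size)      # initial _search?scroll=10m&size=1000
    collected = []
    while hits and len(collected) < limit:
        collected.extend(hits)                   # "process the information"
        scroll_id, hits = scroll_fn(scroll_id)   # get the next 1000
    return collected[:limit]
```

Each `scroll_fn` call would POST the `_scroll_id` from the previous response back to the cluster, so the heap only ever holds one batch of 1000 plus whatever you keep.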
I saw something suggesting the curl command lets you search from 1 to 1000,
then from 1001 to 2000, etc.
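That 1–1000, 1001–2000 pattern is plain from/size pagination; a quick sketch of the window arithmetic (note that Elasticsearch's `from` parameter is 0-based, so "1 to 1000" is `from=0&size=1000`):

```python
def page_windows(total=10000, page=1000):
    """Yield (from_, size) pairs, one per ?from=<from_>&size=<size> request."""
    for start in range(0, total, page):
        yield start, min(page, total - start)

# each pair maps to a request like:
# curl -XGET '10.55.91.111:9200/house/node2/_search?from=<from_>&size=<size>'
```

This works for modest depths, but deep from/size paging gets expensive because each request still sorts everything up to `from + size`, which is why scroll tends to be the better fit for walking a whole index.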