hello,
is there a way to determine the value of the "size" parameter to make an import using scan/scroll as fast as possible?
for example, is "size=100000" too much for Elasticsearch?
Having a very large size can be dangerous, as it fetches that number of documents from every shard in the index.
So if you set this to 100000 and you have 10 shards, that is 1,000,000 documents per scroll request, which will have an impact on heap use.
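To illustrate, here is a minimal sketch using the official Python client's scan helper, which wraps the scroll API. The host URL, index name, and the size values suggested in the comments are assumptions for the example; the practical approach is to benchmark a few moderate values on your own cluster rather than jumping straight to 100000:

```python
# Minimal scan/scroll sketch with the official Python client.
# Assumptions: a cluster at localhost:9200 and an index named "my-index".
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch("http://localhost:9200")

# "size" is per scroll request; moderate values (e.g. 500, 1000, 5000)
# keep heap pressure low. Time a fixed import with each candidate value
# and pick the fastest, instead of maximizing size outright.
for hit in scan(
    es,
    index="my-index",
    query={"query": {"match_all": {}}},
    size=1000,
    scroll="2m",  # how long each scroll context stays alive between requests
):
    doc = hit["_source"]  # process each document here
```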