Hi all,
I have to process 1M records and download those 1M records.
Is it practical to use a bucket size of 1 million, so that when I download the CSV report it contains all 1 million documents?
Or is there another approach for this?
I don't think you can do that with Kibana (but I may be wrong). To download such a large amount of data, you can take a look at this other post,
which has ideas on using the Elasticsearch API directly (scrolling, paged search, etc.) or third-party tooling.
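As a rough illustration of the scrolling approach, here is a minimal sketch using the official elasticsearch-py client's `scan` helper, which pages through all matching documents via the scroll API and streams them to a CSV file. The host URL, index name (`records`), and field names (`id`, `status`) are placeholders, not anything from your setup; adjust them to your own index.

```python
import csv

from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

# Placeholder host; point this at your cluster.
es = Elasticsearch("http://localhost:9200")

# Assumed field names; replace with the fields in your documents.
fields = ["id", "status"]

with open("report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(fields)  # header row
    # scan() pages through the whole index with the scroll API in
    # batches of `size`, so memory use stays flat even for 1M+ docs.
    for hit in scan(
        es,
        index="records",
        query={"query": {"match_all": {}}},
        size=1000,
    ):
        src = hit["_source"]
        writer.writerow([src.get(field, "") for field in fields])
```

Because the helper fetches documents in batches rather than in one response, this sidesteps the result-window and bucket-size limits you would hit trying to pull 1M documents through a single Kibana report.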
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.