How to speed up consuming the large generator returned by a "scan" query in Python

Hello all,
I'm using the "helpers" module (specifically its "scan" method) in the Python Elasticsearch client to query Elasticsearch.
The query returns its results as a generator.
In order to do something with this data I'm converting the generator to a list, but that takes a lot of time, around 30 seconds for 200k documents.
Is there any way to make it faster?
Does Elasticsearch have to return query results as a generator?
thanks!

Can you provide a snippet of your Python code?

So you're looking to scroll through documents as fast as possible?
A couple of things come to mind. One is improving the overall speed of your cluster to get more throughput. You could also try larger or different batch sizes.
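For instance, here's a minimal sketch of passing a larger batch size to "scan" via its "size" parameter, assuming a local cluster at http://localhost:9200 and a hypothetical index "my-index" (adjust both for your setup):

```python
from elasticsearch import Elasticsearch, helpers

# Hypothetical connection details and index name; adjust for your cluster.
es = Elasticsearch("http://localhost:9200")
query = {"query": {"match_all": {}}}

# scan() returns a generator; "size" controls how many documents each
# underlying scroll request fetches (the default is 1000).
hits = helpers.scan(es, query=query, index="my-index", size=5000)

# Consuming the generator directly streams results instead of first
# materialising a 200k-element list in memory.
count = 0
for hit in hits:
    count += 1  # replace with your own handling of hit["_source"]
print(count)
```

If you only need to iterate over the results once, processing them inside the loop like this also avoids the cost of building the full list at all.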

The other might be parallelising your code using sliced scroll.
Since this option dives into a more complex area of Python and programming generally, it's not really covered in the scope of the Python client, but I did see an example in a GitHub ticket that you might look at for inspiration:
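This isn't the example from that ticket, just a rough sketch of the idea, again assuming a local cluster and a hypothetical index "my-index": each process scans one slice of the same scroll, so the slices are fetched in parallel.

```python
from multiprocessing import Pool

from elasticsearch import Elasticsearch, helpers

SLICES = 4            # number of parallel slices; keep it <= the shard count
INDEX = "my-index"    # hypothetical index name


def scan_slice(slice_id):
    # Each worker process creates its own client; connections
    # should not be shared across a fork.
    es = Elasticsearch("http://localhost:9200")
    query = {
        # The "slice" clause restricts this scroll to one disjoint
        # subset of the index's documents.
        "slice": {"id": slice_id, "max": SLICES},
        "query": {"match_all": {}},
    }
    return [hit["_source"] for hit in helpers.scan(es, query=query, index=INDEX)]


if __name__ == "__main__":
    with Pool(SLICES) as pool:
        chunks = pool.map(scan_slice, range(SLICES))
    docs = [doc for chunk in chunks for doc in chunk]
    print(len(docs))
```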
