Issue extracting all available documents from an Elasticsearch index using the Python elasticsearch library

Hi, I am facing the issue below.

I want to extract a full dump, i.e. all available documents, from index "X" using Python code and save it to a JSON file. But currently I am able to extract only a few documents using the command below. For this I am using the elasticsearch Python library.

res = es.search(index="X", body={"query": {"match_all": {}}})

Kindly suggest.
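For reference, a likely reason only a few documents come back is that `_search` returns just 10 hits by default; the `size` parameter raises that, but only up to the index's `max_result_window` setting (10,000 by default). A minimal sketch of the request body, with the helper name made up here for illustration:

```python
def build_match_all_query(size=10000):
    """Build a match_all body asking for up to `size` hits.
    Note: _search will not return more than index.max_result_window
    (10,000 by default) hits this way, regardless of `size`."""
    return {"query": {"match_all": {}}, "size": size}

# Against a live cluster this would be used as (es is an Elasticsearch client):
# res = es.search(index="X", body=build_match_all_query())
```

So a plain `_search` can never page through hundreds of thousands of hits on its own; that is what the scroll/scan mechanism discussed below is for.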


I changed the function and tried as given below.

res = scan(es, query={"query": {"match_all":{}}}, scroll="5m", index="X", raise_on_error=True, preserve_order=False, clear_scroll=True)

but I am still not able to extract all available hits (approximately 355,500 hits).

Kindly suggest how to extract all hits using Python code.

You will want to use the scan and scroll API for that; _search is not designed to collect every document.
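One thing worth checking in the `scan` attempt above: `helpers.scan` returns a generator, so nothing is fetched until you iterate over it. A hedged sketch of consuming the generator and writing the documents to a JSON file (the file name and the writer helper are made up here for illustration):

```python
import json

def dump_hits_to_json(hits, path):
    """Write the _source of each hit (as yielded by helpers.scan) to a JSON file.
    Returns the number of documents written."""
    docs = [hit["_source"] for hit in hits]
    with open(path, "w") as f:
        json.dump(docs, f)
    return len(docs)

# With a live cluster this would be driven by the scan helper, e.g.:
# from elasticsearch import Elasticsearch
# from elasticsearch.helpers import scan
# es = Elasticsearch()
# hits = scan(es, query={"query": {"match_all": {}}}, index="X", scroll="5m")
# count = dump_hits_to_json(hits, "X_dump.json")
```

For roughly 355,500 hits, building the full list in memory should still be manageable; for much larger indices you would stream each document to the file instead.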

Thanks @warkolm !

Is there a way to collect every document of particular day or timestamp instead of all hits ?

Kindly suggest.
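Filtering to a single day can be done by swapping the `match_all` for a `range` query on the timestamp field and passing it to the same `scan` helper. A minimal sketch, assuming the field is called `@timestamp` (adjust to your mapping) and using Elasticsearch date math (`||+1d`) for the upper bound:

```python
def build_day_query(day, field="@timestamp"):
    """Range query matching all documents whose timestamp falls on `day`
    (formatted as yyyy-MM-dd). The field name is an assumption here."""
    return {
        "query": {
            "range": {
                field: {
                    "gte": day,
                    "lt": day + "||+1d",  # Elasticsearch date math: day + 1 day
                    "format": "yyyy-MM-dd",
                }
            }
        }
    }

# q = build_day_query("2020-01-15")
# hits = scan(es, query=q, index="X", scroll="5m")
```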

Hi team,

Is there an extension or connector through which we can extract all documents from an Elasticsearch index into a CSV file without Python code?

Kindly let me know

I recall hearing about elasticdump (a third-party component, not supported here) in the past, but I am not sure whether it supports exporting to CSV, and I am not sure about version compatibility. I have not used it myself.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.