Hello All,
I'm facing an issue when trying to download more than 10,000 records.
I have 25,000 documents in my Elasticsearch index, and all of them are unique (Logstash doc_as_upsert is used). The challenge is that Elasticsearch restricts fetching aggregated data to 10,000 records.
The intention is to download all the available data, since each document is unique. How can this be achieved with Elasticsearch, or with Java backend logic?
Reports can be fetched for 3 months, 6 months, a year, etc., so the volume of data will be large. I need to generate an Excel file for the reports, but I am unable to get more than 10,000 records.
I tried the scroll API, but I suppose it doesn't serve the purpose. Does the scroll API work for aggregation data? I suppose not. Java backend logic is written to fetch the data and download it as Excel.
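For reference, my understanding from the docs is that scroll only pages over raw hits; any aggregation is computed once on the first request and is not paginated. A basic scroll sequence looks roughly like this, using my index name (the 1m keep-alive and page size of 1000 are just examples):

```
GET /cis-monitor-campaignreport-*/_search?scroll=1m
{
  "size": 1000,
  "query": { "match_all": {} }
}

POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<_scroll_id returned by the previous request>"
}
```

Repeating the second request until no hits come back would walk all documents, but as far as I can tell it does not help with paginating aggregation buckets.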
How does downloading large report data work in Elasticsearch, e.g. daily, monthly, or yearly reports?
I see many mentions of the scroll API, but I don't understand exactly how it is implemented, and I'm not sure it would help in my case.
Any suggestion would be helpful. ELK version 7.9.1.
GET /cis-monitor-campaignreport-*/_search
{
  "size": 0,
  "aggs": {
    "uniqueIdList": {
      "terms": {
        "field": "campaignreport.uniqueId.keyword"
      }
    }
  }
}
The query below works fine for data up to 10,000 but fails for anything above; in my case there are 30,000 documents.
GET /cis-monitor-campaignreport-*/_search
{
  "size": 20000,
  "aggs": {
    "uniqueIdList": {
      "terms": {
        "field": "campaignreport.uniqueId.keyword"
      }
    }
  }
}
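From what I've since read, the body-level "size" controls returned hits, not aggregation buckets, so raising it to 20000 just trips the index.max_result_window limit. The documented way to page through all terms buckets appears to be a composite aggregation, which returns an after_key that you pass back as "after" on the next request. A rough sketch with my field (the page size of 1000 is just an example):

```
GET /cis-monitor-campaignreport-*/_search
{
  "size": 0,
  "aggs": {
    "uniqueIdList": {
      "composite": {
        "size": 1000,
        "sources": [
          { "uniqueId": { "terms": { "field": "campaignreport.uniqueId.keyword" } } }
        ]
      }
    }
  }
}

GET /cis-monitor-campaignreport-*/_search
{
  "size": 0,
  "aggs": {
    "uniqueIdList": {
      "composite": {
        "size": 1000,
        "sources": [
          { "uniqueId": { "terms": { "field": "campaignreport.uniqueId.keyword" } } }
        ],
        "after": { "uniqueId": "<after_key value from the previous response>" }
      }
    }
  }
}
```

Repeating the second request until the response no longer returns an after_key would, as I understand it, walk all ~30,000 buckets 1,000 at a time without touching max_result_window.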
index.max_result_window: I would like to avoid changing this setting, as it may impact the cluster. What are the alternatives?
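One alternative I'm considering, since every document is unique anyway: skip the aggregation and page over the raw documents with search_after, sorting on the same keyword field. Each page stays well under the 10,000 window, so no cluster settings need to change. A sketch (field name from my mapping; page size is just an example):

```
GET /cis-monitor-campaignreport-*/_search
{
  "size": 1000,
  "sort": [
    { "campaignreport.uniqueId.keyword": "asc" }
  ]
}

GET /cis-monitor-campaignreport-*/_search
{
  "size": 1000,
  "sort": [
    { "campaignreport.uniqueId.keyword": "asc" }
  ],
  "search_after": ["<last sort value from the previous page>"]
}
```

The Java backend could keep issuing the second form, feeding the sort value of the last hit of each page into "search_after", until a page comes back empty, and append each page to the Excel file as it goes.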
Many Thanx