I would like to retrieve documents from an index that contains a large amount of data (~1 million documents).
I am using ElasticsearchClient to connect to and query Elasticsearch. I tested the solution with a small amount of data and it works well, but I got an error when testing with size(105000). Do you have an idea how to solve the problem?
Below are the implementation of the connection, the request query, and the error.
Please don't post pictures of text, logs or code. They are difficult to read, impossible to search and replicate (if it's code), and some people may not even be able to see them.
Ok, thank you, noted. I updated a setting on the index in order to increase the result window:
PUT /MyIndex/_settings
{
  "index" : {
    "max_result_window" : 2100000
  }
}
I also updated the request in the code in order to get the total hits: .trackTotalHits(t -> t.enabled(true))
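For context, this is roughly what that option looks like as a REST search request (a sketch, assuming a hypothetical index name my-index): by default Elasticsearch stops counting total hits at 10,000, and track_total_hits forces an exact count.

```
GET /my-index/_search
{
  "track_total_hits": true,
  "query": {
    "match_all": {}
  }
}
```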
What is the specification of your cluster? How much heap do you have assigned?
Increasing that limit will put a lot more load on the cluster, and it is not clear that it can handle this. Is there anything in the Elasticsearch logs?
By default, the size and from parameters display up to 10,000 records to your users. If you want to change this limit, you can change the index.max_result_window setting, but be aware of the consequences (i.e. memory).
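Rather than raising the window, deep result sets are usually paged with search_after, which avoids the memory cost of large from + size requests. A minimal sketch, assuming a hypothetical index my-index with a sortable timestamp field and a unique keyword field id as a tiebreaker (both names are assumptions): each page repeats the same query and sort, passing the sort values of the last hit of the previous page. The first page simply omits search_after.

```
GET /my-index/_search
{
  "size": 10000,
  "query": { "match_all": {} },
  "sort": [
    { "timestamp": "asc" },
    { "id": "asc" }
  ],
  "search_after": [ 1683000000000, "doc-42" ]
}
```

The same pattern is available in the Java ElasticsearchClient via the searchAfter parameter on the search request builder.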