I am getting ScanError (ScanError('Scroll request has only succeeded on 7 (+5 skipped) shards out of 15.')) when the search result set is large (mostly when it exceeds 10k hits).
I have a few questions about it:
- What is the underlying reason for this issue?
- Is this related to the number of shards or the number of search results?
- Is there a way to handle this via an Elasticsearch setting, a query parameter, scaling, or shard count/settings?
- I found a suggestion that this can be avoided with a flag (raise_on_error in the Python library), but that would just suppress the exception and return incomplete results.
Please let me know the correct way to solve this issue.
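For context on why the error fires: in the elasticsearch-py helpers, each scroll page reports per-shard statistics, and as far as I can tell the client raises ScanError exactly when successful + skipped shards come up short of the total (the numbers in your message: 7 + 5 = 12 of 15). A minimal, self-contained sketch of that check (the function name is mine, not the library's):

```python
def scroll_is_complete(shards: dict) -> bool:
    """A scroll page is complete only when every shard reported in.
    The ScanError message ("succeeded on 7 (+5 skipped) shards out of 15")
    corresponds to this check failing: successful + skipped < total."""
    return shards.get("successful", 0) + shards.get("skipped", 0) >= shards["total"]

# The numbers from the error above: 7 + 5 = 12 of 15 shards answered, so 3
# shards dropped out mid-scroll (e.g. timeout, relocation, node restart).
print(scroll_is_complete({"total": 15, "successful": 7, "skipped": 5}))   # False
print(scroll_is_complete({"total": 15, "successful": 10, "skipped": 5}))  # True
```

So the error is about shards failing during the scroll, not about the raw hit count; a large result set just keeps the scroll context open longer and makes a shard-level failure more likely.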
What is your version?
What does your code look like?
Did you try the PIT + search after method?
The scroll docs say:
We no longer recommend using the scroll API for deep pagination. If you need to preserve the index state while paging through more than 10,000 hits, use the
search_after parameter with a point in time (PIT).
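The PIT + search_after pattern can be sketched as a plain pagination loop. This is a hypothetical helper, not a library API: search_fn stands in for a real search call (e.g. Elasticsearch.search in elasticsearch-py 8.x), and the _shard_doc tiebreaker sort is the one the PIT docs suggest.

```python
def paginate_with_pit(search_fn, pit_id, page_size=1000, keep_alive="1m"):
    """Yield every hit by paging with search_after inside a point in time.

    search_fn: callable taking a request body dict and returning the raw
    search response (a stand-in for client.search(**body))."""
    search_after = None
    while True:
        body = {
            "size": page_size,
            "sort": [{"_shard_doc": "asc"}],           # PIT tiebreaker sort
            "pit": {"id": pit_id, "keep_alive": keep_alive},
        }
        if search_after is not None:
            body["search_after"] = search_after         # resume after last hit
        hits = search_fn(body)["hits"]["hits"]
        if not hits:
            return                                      # no more pages
        yield from hits
        search_after = hits[-1]["sort"]                 # sort values of last hit
```

With elasticsearch-py you would obtain the PIT via client.open_point_in_time(index=..., keep_alive="1m") and release it with client.close_point_in_time(id=pit_id); OpenSearch has an equivalent PIT feature (create_pit/delete_pit in opensearch-py, from 2.4 onward, if I recall correctly), so check the docs for your exact version.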
Hi @dadoonet - I am actually using OpenSearch 2.5, and I believe the error is the same as some users have mentioned above.
Do you know the reason for the error?
Thanks for the PIT reference.
OpenSearch/OpenDistro are AWS-run products and differ from the original Elasticsearch and Kibana products that Elastic builds and maintains. You may need to contact them directly for further assistance.
(This is an automated response from your friendly Elastic bot. Please report this post if you have any suggestions or concerns.)
We can't help. The product is not the same and the APIs are not the same anymore.
I'd recommend switching to the real Elasticsearch. Think about what is already there, like Security, Monitoring, Reporting, SQL, Canvas, Maps UI, Alerting, the built-in solutions named Observability, Security, and Enterprise Search, and what is coming next, like the new powerful ES|QL engine...
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.