Is it good practice to manually merge-sort the search results per shard?

We are having an internal debate on whether it's good practice to manually merge the search results per shard.
Our main index has 135 shards with a single replica on 90 data nodes. The biggest query we run is "top N out of M", where N is 3 million and M is 12 million. We found that Elasticsearch is very slow on some of these queries.
One of our engineers suggested we do the merge-sort manually, i.e. issue one sliced call per shard and then implement the merge-sort logic ourselves.
I am not convinced that's the right direction; it feels like we are reinventing the wheel. Is there anything I missed that would make a manual merge worthwhile? Is anyone doing something similar? Thanks!

it feels like we are reinventing the wheel

I agree with that impression.

It would be better to first try to fix the query, if possible.
Maybe share it here?

@dadoonet thanks for your reply.
The key problem we are facing is deep pagination: we sort and need to persist up to 3 million records somewhere, in one or two blobs. Any best practice to accomplish this?

The key problem we are facing is deep pagination

Don't do it, or use Search After | Elasticsearch Reference [6.0] | Elastic or the scroll API.
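For reference, a minimal search_after sketch, assuming the 6.x request format; the index name and the my_score_field sort field are placeholders for whatever you actually sort on, and the search_after values are copied from the sort values of the last hit of the previous page:

GET /my-index/_search
{
  "size": 10000,
  "query": { "match_all": {} },
  "sort": [
    { "my_score_field": "desc" },
    { "_id": "asc" }
  ],
  "search_after": [0.97, "doc#9999"]
}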

we sort and need to persist up to 3 million records somewhere, in one or two blobs

What do you mean by "blobs"?

We do use the scroll API, and we store the 3M sorted results {id, score} into 2 separate files (sorted ids and all scores).

So what is the problem? I’m confused by the initial question I guess.

So 2 of the possible solutions for top N are (N varies, and could be as much as 3 million):

  1. Assuming we want to keep index.max_result_window at the default value of 10000 to avoid blowing up the heap, we could loop over the scroll API and append pages one after another until we have the 3 million sorted results (a rough sketch of such a loop follows this list).

OR
  2. Create an application that slices the top-N call into 135 concurrent calls (we have 135 shards, so 1 per shard). We then get 135 pre-sorted lists and merge them until the target N is reached. During the merge, if any of the lists is exhausted, we need a mechanism to fetch the next page for that shard's call.
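To make option 1 concrete, here is a rough sketch of the scroll loop, assuming the 6.x REST API; the index name, page size, sort field, and the (truncated) scroll_id are placeholders. The first call opens the scroll, and the second call is repeated with the scroll_id returned by each response until 3 million hits have been appended:

GET /my-index/_search?scroll=1m
{
  "size": 10000,
  "query": { "match_all": {} },
  "sort": [
    { "my_score_field": "desc" }
  ]
}

POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "DXF1ZXJ5QW5kRmV0Y2gB..."
}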

Any suggestion on which solution is better, 1 or 2? Or are there better solutions? Thanks!

Why do you need to extract 3m documents?

Are you still indexing while doing that?

That's one of the requirements of a legacy application; this requirement is not negotiable :slight_smile:

Let's assume "no" for now.

this requirement is not negotiable :slight_smile:

Well. Ok.
At former companies I often found that I had to understand the real need that users were trying to express. Like:

  • User: "I want to export data to Excel".
  • Me: "Well. That's not a business need. That's the way you think you should solve your problem, but what if I come up with something even more efficient that actually suits your needs?"

So, in that case, scrolling multiple parts of the data in parallel makes sense to me: say, instead of extracting a year of data with one scroll call, extracting the 12 months in parallel.

Scroll can be improved though if you sort by _doc, like:

GET /_search?scroll=1m
{
  "sort": [
    "_doc"
  ]
}
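If you go the parallel route, one built-in way to split a scroll (as an alternative to splitting by date ranges as suggested above) is the sliced scroll feature. A minimal sketch, with the index name and slice count chosen arbitrarily, where each worker runs one slice id from 0 to max - 1:

GET /my-index/_search?scroll=1m
{
  "slice": {
    "id": 0,
    "max": 12
  },
  "sort": [
    "_doc"
  ],
  "query": { "match_all": {} }
}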
