Retrieving over a million records in Elasticsearch

At this point you are better off making three of them master-eligible data nodes and the other three data-only nodes.
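A sketch of what that split could look like in elasticsearch.yml, assuming the node.roles syntax from 7.9+ (older versions use the node.master / node.data booleans instead):

```yaml
# On three of the six nodes: master-eligible and data
node.roles: [ master, data ]

# On the other three nodes: data only
node.roles: [ data ]
```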

Or the scoring. See whether it gets faster when you sort by _doc.
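For illustration, a minimal helper that builds such a request body; the function name and page size are mine, but sort on _doc is the real mechanism (it returns hits in index order, so Lucene skips relevance scoring entirely):

```python
def scroll_body(query: dict, page_size: int = 1000) -> dict:
    """Build a search body that pages in index order, skipping scoring."""
    return {
        "query": query,
        "sort": ["_doc"],  # _doc = Lucene index order; no scores computed
        "size": page_size,
    }
```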

It could also be fetching the _ids.
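If you suspect that, one thing to try (assuming a version new enough to support the "_none_" value for stored_fields) is telling the search not to load any stored fields at all, which avoids the per-hit fetch work:

```python
# Skip loading stored fields and _source for every hit; useful when you
# only care that documents matched, not their contents.
body = {
    "query": {"match_all": {}},
    "stored_fields": "_none_",  # don't fetch stored fields per hit
    "_source": False,           # don't fetch the _source either
}
```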

You should use the hot_threads API to see what is taking the time.
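For example, a small helper that builds the hot_threads endpoint URL (threads and interval are real parameters of that API; the host default is an assumption):

```python
def hot_threads_url(host: str = "http://localhost:9200",
                    threads: int = 3, interval: str = "500ms") -> str:
    """URL for the nodes hot_threads API: samples the hottest threads
    on each node over the given interval."""
    return f"{host}/_nodes/hot_threads?threads={threads}&interval={interval}"
```

Hit that URL while the slow search is running and the response will show which threads the nodes are actually spending their time in.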

The bitsets aren't keyed by _id. They live at the Lucene segment level, and _id is something Elasticsearch layers on top of that. Depending on your query it may not even use the cache: if it needs scores it won't, and if the query is super fast without the cache (a term query, say) it will skip it as well.
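To make the distinction concrete, here are the two shapes side by side; the field name "status" is hypothetical. The same term clause runs in query context (scored, no bitset cache) or in filter context (unscored, eligible for the segment-level bitset cache):

```python
# Query context: the term clause contributes to _score, so the
# cached bitsets don't apply.
scored = {"query": {"term": {"status": "active"}}}

# Filter context: no scoring, so the clause can use (and populate)
# the segment-level bitset cache.
filtered = {
    "query": {"bool": {"filter": [{"term": {"status": "active"}}]}}
}
```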

What do you want to do with the results? Elasticsearch's aggregations were built to do interesting things with portions of the documents after applying arbitrary filters. You might have a similar problem. I mean, maybe it's one that can be solved with an aggregation. Or maybe it's one that we just need to understand better.
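For instance, if the goal is really a per-group summary of those million records, the cluster can compute it server-side instead of shipping every hit back. A sketch of such a request body; the field names "timestamp" and "category" are hypothetical:

```python
# Count matching docs per category without returning any hits at all.
body = {
    "size": 0,  # no hits, just the aggregation result
    "query": {"range": {"timestamp": {"gte": "now-1d"}}},
    "aggs": {
        "by_category": {"terms": {"field": "category", "size": 50}}
    },
}
```

That turns "retrieve a million records and count them client-side" into one round trip that returns fifty buckets.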