Huge start value brought down the whole cluster

Hi,

We are running an ES cluster, version 1.5.2, with several billion documents in one index. The cluster performs well under normal conditions. But we found that if we query the index with a normal limit but a huge start value, e.g. several million or several billion, the memory usage of every node in the cluster soars, the nodes go into full GC one by one, and the whole cluster goes down. Is this expected behavior? How can we avoid it?

Don't use a huge start value. Deep pagination is expensive because every shard has to collect and sort `from + size` documents before the coordinating node can merge them, so memory use grows with the start value. In 2.x and later the query fails by default when `from + size` exceeds 10,000 (the `index.max_result_window` setting). Use the scroll API to page through many results instead.
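
For reference, a minimal sketch of scrolling in the 1.x REST API. The host `localhost:9200` and index name `my_index` are placeholders, not from the original post:

```
# Open a scroll context; "scroll=1m" keeps it alive for 1 minute per round trip.
curl -XGET 'localhost:9200/my_index/_search?scroll=1m' -d '{
  "size": 1000,
  "query": { "match_all": {} }
}'

# The response contains a "_scroll_id". In 1.x you send it back as the raw
# request body to fetch the next page; repeat until no hits are returned.
curl -XGET 'localhost:9200/_search/scroll?scroll=1m' -d '<scroll_id from previous response>'
```

Each call returns the next batch of `size` hits per shard, so memory stays bounded no matter how far into the result set you go.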