We are running an Elasticsearch cluster on version 1.5.2, with several billion documents in one index. The cluster performs well under normal conditions. However, we found that if we query the index with a normal page size (`size`) but a huge start offset (`from`), e.g. several million or even several billion, memory usage on every node soars; the nodes then go into full GC one by one and the whole cluster goes down. Is this expected behavior? How can we avoid it?
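For context, here is a rough sketch of why a deep `from` offset is so expensive. My understanding (an assumption about Elasticsearch internals, not something measured on our cluster) is that each shard must build a priority queue of `from + size` hits, and the coordinating node then merges roughly `num_shards * (from + size)` entries in memory; the function and numbers below are purely illustrative:

```python
# Hypothetical back-of-the-envelope estimate of deep-pagination cost.
# Assumption: each shard collects (from + size) hits, and the coordinating
# node merges num_shards * (from + size) score/doc entries on its heap.

def deep_page_entries(from_offset: int, size: int, num_shards: int) -> int:
    """Approximate number of entries the coordinating node must merge
    for a single from/size search request."""
    per_shard = from_offset + size
    return num_shards * per_shard

# Example: from=10_000_000, size=10, across 50 shards -> ~500 million
# entries merged on a single node, which plausibly explains the GC storms.
print(deep_page_entries(10_000_000, 10, 50))  # 500000500
```

If this model is right, the memory cost scales with `from`, not with `size`, which matches the behavior we observe: the page size is small, but the offset dominates.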