Without seeing the detail of the errors, my guess is that you lost contact with one of the replicas that was servicing your query. A scroll effectively "locks" a point in time for your client, which means a particular set of segment files on each shard is preserved from any background merging that might otherwise clear them away on a replica. If you fail to return within the allotted time window, or the replica becomes disconnected, your view is not preserved and we can't continue to scroll through the same files, hence the failure.
That is permitted: writes just end up in new segment files that are outside the scope of the scroll's view. Your client may have exceeded the keep-alive timeout you set as part of the scroll API, which dictates how long its view of the data is held open to service your requests. I'm not sure exactly how a timeout error manifests itself, but it could be the loss of shard context you saw.
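For illustration, here's a rough sketch of a scroll loop using the elasticsearch-py client (parameter names vary a bit between client versions, and the index name and endpoint are placeholders). The important bit is that the keep-alive is re-sent with every scroll call, so it only needs to cover the gap between consecutive requests, not the whole export.

```python
# Rough sketch, assuming the elasticsearch-py client; adjust names/params
# for your client version. The `scroll` keep-alive is renewed on each call,
# so each round trip must come back before the previous keep-alive expires.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

# Open the scroll with a 2-minute keep-alive for the point-in-time view.
resp = es.search(index="my-index", scroll="2m", size=1000,
                 query={"match_all": {}})
scroll_id = resp["_scroll_id"]

try:
    while resp["hits"]["hits"]:
        for hit in resp["hits"]["hits"]:
            pass  # process each document here
        # Each scroll call renews the keep-alive; if processing a batch takes
        # longer than "2m", the server may discard the search context.
        resp = es.scroll(scroll_id=scroll_id, scroll="2m")
        scroll_id = resp["_scroll_id"]
finally:
    # Release the held segment files as soon as you're done.
    es.clear_scroll(scroll_id=scroll_id)
```

If your per-batch processing is slow, either raise the keep-alive or shrink the batch `size` so each round trip stays comfortably inside the window.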