There is a refresh thread pool and each index has a job in that pool. When one refresh finishes, the job is rescheduled for a second later. It looks to me like the maximum delay before a change becomes visible will always be a few milliseconds longer than a second, and I suspect the average will be right around half a second. Higher throughput is going to increase those few milliseconds. As will slow disks. As will GC, but if GC is a problem you have bigger problems. The size of the index should be irrelevant, except that larger indexes might have more going on, like merges and searches, that can cause disk IO or CPU usage.
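To make that arithmetic concrete, here's a toy Python sketch of the fixed-delay scheduling described above. It is not the actual Java implementation; the index name and timings are made up:

```python
import time

REFRESH_INTERVAL = 1.0  # Elasticsearch's default index.refresh_interval

def refresh_loop(index, cycles=5):
    # Fixed-delay scheduling: when a refresh finishes, the next one is
    # scheduled a full interval later, so each cycle takes
    # interval + refresh-duration. A document indexed at a random moment
    # waits at most about that long to become visible, and roughly
    # interval / 2 on average.
    for _ in range(cycles):
        time.sleep(REFRESH_INTERVAL)  # wait out the interval
        start = time.monotonic()
        time.sleep(0.005)             # stand-in for the refresh work itself
        print(f"refreshed {index}, refresh took {time.monotonic() - start:.3f}s")

refresh_loop("my-index")  # hypothetical index name
```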
That all being said, I haven't heard of folks having trouble with refreshes falling behind. What falls behind on slow disks when you push high throughput is merges, and the way to help that is to increase the refresh interval so you make fewer small segments. It's not perfect, but it's an easy knob to tune.
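For reference, `refresh_interval` is a dynamic index setting, so you can change it on a live index. A minimal sketch using Python's `requests` against a local cluster; the index name, URL, and value are just examples:

```python
import requests

# Raise the refresh interval from the default 1s to 30s so indexing
# produces fewer small segments and merging has less to do.
resp = requests.put(
    "http://localhost:9200/my-index/_settings",  # hypothetical index/URL
    json={"index": {"refresh_interval": "30s"}},
)
resp.raise_for_status()
print(resp.json())  # {'acknowledged': True}
```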
Elasticsearch has a backpressure mechanism to slow down indexing throughput if merges fall too far behind, so you might notice that indexing itself just slows down. Elasticsearch will log something about that when it happens. And clients won't get acknowledgements until their indexing requests have finished, so they will feel the backpressure.
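If you'd rather watch stats than grep logs, the index stats API exposes cumulative merge throttling. A sketch, again assuming a local cluster and a made-up index name:

```python
import requests

# total_throttled_time_in_millis is the cumulative time merges have spent
# throttled; if it keeps growing, merging is struggling to keep up.
stats = requests.get("http://localhost:9200/my-index/_stats/merge").json()
merges = stats["indices"]["my-index"]["primaries"]["merges"]
print(merges["total_throttled_time_in_millis"], "ms of throttled merging")
```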