Is this a bug in the DateHistogram aggregation?

I was reading the source code of the date histogram aggregation and found this:

// server/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalDateHistogram.java
// (line 358)
    private void addEmptyBuckets(List<Bucket> list, AggregationReduceContext reduceContext) {
        /*
         * Make sure we have space for the empty buckets we're going to add by
         * counting all of the empties we plan to add and firing them into
         * consumeBucketsAndMaybeBreak.
         */
        class Counter implements LongConsumer {
            private int size;

            @Override
            public void accept(long key) {
                size++;
                if (size >= REPORT_EMPTY_EVERY) {
                    reduceContext.consumeBucketsAndMaybeBreak(size);
                    size = 0;
                }
            }
        }
        Counter counter = new Counter();
        iterateEmptyBuckets(list, list.listIterator(), counter);
        reduceContext.consumeBucketsAndMaybeBreak(counter.size);

        InternalAggregations reducedEmptySubAggs = InternalAggregations.reduce(emptyBucketInfo.subAggregations, reduceContext);
        ListIterator<Bucket> iter = list.listIterator();
        iterateEmptyBuckets(list, iter, new LongConsumer() {
            private int size = 0;

            @Override
            public void accept(long key) {
                size++;
                if (size >= REPORT_EMPTY_EVERY) {
                    reduceContext.consumeBucketsAndMaybeBreak(size);
                    size = 0;
                }
                iter.add(new InternalDateHistogram.Bucket(key, 0, format, reducedEmptySubAggs));
            }
        });
    }

It seems the first iterateEmptyBuckets call is there to make sure adding the empty buckets won't blow up memory: it counts every empty bucket we plan to add and reports them to the circuit breaker via reduceContext.consumeBucketsAndMaybeBreak. The second iterateEmptyBuckets call is the one that actually inserts the empty buckets into the result list.

My question is: why is reduceContext.consumeBucketsAndMaybeBreak also called inside the second pass over iterateEmptyBuckets? As written, it looks like the empty buckets get reported to the breaker in both passes. Shouldn't we keep that call only in the first pass and leave the second pass to just insert the buckets?
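
To make the question concrete, here is a toy sketch of the variant I have in mind. None of this is Elasticsearch code: consumeBucketsAndMaybeBreak, iterateEmptyBuckets, and REPORT_EMPTY_EVERY below are simplified stand-ins for the real methods and constant (the real value of REPORT_EMPTY_EVERY may differ).

import java.util.ArrayList;
import java.util.List;
import java.util.ListIterator;
import java.util.function.LongConsumer;

// Toy model of addEmptyBuckets, only to phrase the question precisely.
// Everything here is a simplified stand-in, not Elasticsearch code, and
// REPORT_EMPTY_EVERY is an arbitrary value chosen for the example.
public class EmptyBucketQuestion {

    static final int REPORT_EMPTY_EVERY = 1024;
    static long reportedToBreaker = 0;

    // Stand-in for reduceContext.consumeBucketsAndMaybeBreak(int): just counts
    // instead of tripping a real circuit breaker.
    static void consumeBucketsAndMaybeBreak(int size) {
        reportedToBreaker += size;
    }

    // Stand-in for iterateEmptyBuckets: pretend there are `gaps` empty keys.
    static void iterateEmptyBuckets(long gaps, LongConsumer consumer) {
        for (long key = 0; key < gaps; key++) {
            consumer.accept(key);
        }
    }

    public static void main(String[] args) {
        long gaps = 5000;
        List<Long> buckets = new ArrayList<>();

        // First pass: count the empty buckets and report them to the breaker
        // in batches, like the Counter class in the real code does.
        int[] pending = { 0 };
        iterateEmptyBuckets(gaps, key -> {
            if (++pending[0] >= REPORT_EMPTY_EVERY) {
                consumeBucketsAndMaybeBreak(pending[0]);
                pending[0] = 0;
            }
        });
        consumeBucketsAndMaybeBreak(pending[0]);

        // Second pass, as the question proposes: only insert the buckets,
        // with no further consumeBucketsAndMaybeBreak calls, since the first
        // pass already accounted for every one of them.
        ListIterator<Long> iter = buckets.listIterator();
        iterateEmptyBuckets(gaps, key -> iter.add(key));

        System.out.println("reported to breaker: " + reportedToBreaker
            + ", buckets inserted: " + buckets.size());
    }
}

Running this sketch reports exactly as many buckets to the (stand-in) breaker as it inserts. In the real code the second consumer reports to the breaker as well, and that second round of reporting is the part I don't follow.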