Hi all,
To anyone having this issue, I may have found the problem, though I don't fully understand the underlying cause.
In any case, what you'll notice in my configuration is a file output plugin that writes information about each document that was updated or inserted. For privacy reasons, I anonymized the path to my log file, but that path pointed to the same log file the pipeline itself writes to. I did this to consolidate everything in one place, so the per-document information would sit alongside the pipeline's own log output. However, it seems that under a heavy load of documents this interferes with the pipeline's ability to update the timestamp. Perhaps because the output plugin keeps a handle on the file and won't let the pipeline close it, the pipeline is then prevented from writing its jdbc metadata to the other file? I'm not sure.
I've since pointed that output at a different log file, and my issue has seemingly vanished. I've also blasted the pipeline with thousands of documents to refresh, and it took it like a champ. For that reason, I believe writing to the same file as the pipeline was the issue.
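For reference, here's a minimal sketch of the kind of separation I mean, assuming a standard Logstash jdbc input plus file output setup like mine. The paths, connection details, and query are placeholders I made up, not my actual configuration; the point is simply that the file output's path is different from both the pipeline's own log file (whatever `path.logs` resolves to) and the jdbc tracking metadata file.

```
input {
  jdbc {
    # placeholder connection details
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"
    jdbc_user => "user"
    statement => "SELECT * FROM documents WHERE updated_at > :sql_last_value"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    # jdbc metadata (the timestamp) lives in its own file
    last_run_metadata_path => "/path/to/.jdbc_last_run"
    schedule => "* * * * *"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "documents"
  }
  file {
    # per-document debug info goes to its own file,
    # NOT the log file Logstash itself writes to
    path => "/path/to/document-audit.log"
    codec => json_lines
  }
}
```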
TL;DR: Don't point a file output plugin at the same log file the pipeline itself writes to. It may work with a small number of documents, but under high load it seems to prevent the pipeline from functioning correctly.