Best way to introduce multiple new NLP models to existing indexes?

I am following this documentation: Add NLP inference to ingest pipelines | Machine Learning in the Elastic Stack [8.14] | Elastic. From it I learned that I can set up an ingest pipeline with an inference processor and reindex existing indexes into new ones through that pipeline.
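For reference, this is roughly the setup I have in mind, based on that page (the model ID, pipeline name, and index names below are placeholders, not my actual values):

```json
PUT _ingest/pipeline/nlp-enrich
{
  "processors": [
    {
      "inference": {
        "model_id": "my_nlp_model",
        "target_field": "ml.inference"
      }
    }
  ]
}

POST _reindex?wait_for_completion=false
{
  "source": { "index": "my-index" },
  "dest": {
    "index": "my-index-enriched",
    "pipeline": "nlp-enrich"
  }
}
```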

If we have hundreds of TBs of data in the cluster and want to apply the model to all of it, is reindexing through the pipeline (and then deleting the old indexes) the best approach?

And if so, what happens when we need to experiment iteratively with different models side by side? Would each new model mean reindexing the whole cluster again with an updated ingest pipeline?