IndexFormatTooNewException after bringing a node upgraded from 2.0.0 -> 2.3.2 back into the cluster

This is a rescoping of a question I posted in "Node upgraded 2.0.0 to 2.3.2 can't communicate with other nodes in cluster". I think I have narrowed down the cause, and wanted a clean thread to cover it.

Our cluster is running ES 2.0.0 (which uses Lucene 5.2.1). When I shut down one node and bring it back up with ES 2.3.2 (which uses Lucene 5.5.0), the logs on the master get flooded with:

Caused by: org.apache.lucene.index.IndexFormatTooNewException: Format version is not supported (resource BufferedChecksumIndexInput(SimpleFSIndexInput(path="F:\data\elasticsearch\nodes\0\indices\ml_v7\2\index\segments_1ra"))): 6 (needs to be between 0 and 5)

My guess is that this is happening because, after bringing the node back up, I re-enable allocation as instructed in step 5 of the Rolling Upgrades document, but shards written with the newer Lucene format can't be allocated back to the older-version nodes. This seems to be confirmed by this issue: "Elasticsearch shouldn't try to balance shards from nodes with newer version of lucene to nodes with older versions of lucene".
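For reference, this is roughly what I'm doing around the restart. It's a minimal sketch; the host/port and the choice of a transient setting are just my setup, and the setting name is the standard cluster.routing.allocation.enable from the Rolling Upgrades doc:

```python
# Sketch of the allocation toggle I run before/after restarting a node.
# Assumptions: a client reachable at localhost:9200, transient settings.
import requests

ES = "http://localhost:9200"  # placeholder for our cluster endpoint

def set_allocation(mode):
    """Set cluster.routing.allocation.enable to 'none' or 'all'."""
    resp = requests.put(
        "%s/_cluster/settings" % ES,
        json={"transient": {"cluster.routing.allocation.enable": mode}},
    )
    resp.raise_for_status()
    return resp.json()

# Before shutting the node down:
#   set_allocation("none")
# After the upgraded node rejoins (step 5) -- which is when the errors start:
#   set_allocation("all")
```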

What should the workflow be here? Do I need to upgrade all of my nodes and bring them online together? Or should I keep allocation set to 'none' until all nodes are upgraded?

It is imperative that I don't disrupt queries on our production site while this is going on, so I want to know what the process should be. 2.0.0 to 2.3.2 is listed as supported for a rolling upgrade.

thanks,
~john