Upgrading results in red cluster

I've got a 3-node setup in production: 2x data nodes, 1x client node. I do rolling deployments using Octopus Deploy, but in production I often end up with a red cluster. The deployment pipeline takes one node at a time and does the following:

  1. Ensure green cluster status
  2. Take node out of NLB
  3. Check for pending reboots
  4. Prepare the node for shutdown (disable shard allocation and perform a synced flush; see the sketch after this list)
  5. Shut down the Elasticsearch Windows service, install the new Elasticsearch version, and start the service again
  6. Wait until local Elasticsearch node is alive
  7. Enable shard allocation
  8. Install Elasticsearch plugins
  9. Wait until cluster is green.
  10. Put node back into NLB
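
For reference, here is a minimal sketch of what steps 1, 4, 7 and 9 boil down to against the Elasticsearch 2.x REST API. This is not my actual Octopus script; the host (localhost:9200) and the timeout values are assumptions, and error handling is simplified:

```python
import requests

ES = "http://localhost:9200"  # assumed address of the local node


def wait_for_green(timeout="300s"):
    """Steps 1 and 9: block until the cluster reports green (or the timeout expires)."""
    r = requests.get(ES + "/_cluster/health",
                     params={"wait_for_status": "green", "timeout": timeout})
    r.raise_for_status()
    return r.json()["status"] == "green"


def prepare_node_shutdown():
    """Step 4: stop shard allocation and do a synced flush before stopping the service."""
    requests.put(ES + "/_cluster/settings", json={
        "transient": {"cluster.routing.allocation.enable": "none"}
    }).raise_for_status()
    # A synced flush speeds up shard recovery after the restart; it can return 409
    # for shards with ongoing indexing, which is safe to ignore here.
    requests.post(ES + "/_flush/synced")


def enable_allocation():
    """Step 7: allow the cluster to allocate shards again once the node is back up."""
    requests.put(ES + "/_cluster/settings", json={
        "transient": {"cluster.routing.allocation.enable": "all"}
    }).raise_for_status()
```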

The result is (often) that I end up with a bunch of shards that are not allocated to any node, hence the red cluster, even though I wait for a green cluster before continuing to the next node. What am I missing here?

UPDATE: It seems that it's old indices from more than 6 months ago that suddenly start to pop up (we create an index per day). The cluster thinks they should be present, but they were deleted by our retention script. How come they show up again after a node reboot?
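
For what it's worth, this is roughly how I check which indices the cluster currently reports as red, so I can compare them against what the retention script deleted (just a sketch; the host is an assumption):

```python
import requests

ES = "http://localhost:9200"  # assumed address of a reachable node

# level=indices makes /_cluster/health report per-index status instead of
# only the cluster-wide status.
health = requests.get(ES + "/_cluster/health", params={"level": "indices"}).json()
red_indices = sorted(name for name, info in health["indices"].items()
                     if info["status"] == "red")
print("red indices:", red_indices)
```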

In this particular case I was upgrading from 2.3.2 to 2.4.1.

What version are you on?