I'm not sure I agree this is wrong. A green status means the cluster has no unassigned shards; a yellow status means the cluster has at least one unassigned shard, which can lead to data loss if a node fails.
Adding a delay before changing the state to yellow would probably cause a lot of issues. What if someone, believing the cluster is green, removes a node while the cluster is actually yellow but the state change is delayed, and that leads to data loss? I see no reason to change this behavior.
Also, is the cluster being yellow impacting something in your Elastic Stack, or just the health check script you use in Argo CD?
If the impact is only that your Argo CD application becomes degraded because of the yellow status, then it is better and easier to fix it in your script: add a delay there before marking it degraded, or check it several times to confirm that the cluster is still yellow, for example, check 3 times with a 5-second interval between them (see the sketch below).
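A minimal Python sketch of that retry idea, assuming you can call the cluster health endpoint from your check; the `ES_URL`, the check count, and the interval are placeholders you would adapt to your setup (Argo CD custom health checks themselves are written differently, but the logic is the same):

```python
import time
import requests

ES_URL = "http://localhost:9200"  # assumption: point this at your cluster


def cluster_is_persistently_yellow(checks: int = 3, interval: int = 5) -> bool:
    """Return True only if the cluster is yellow (or worse) on every check.

    A transient yellow during shard relocation will usually clear between
    checks, so a single yellow reading is not enough to report degraded.
    """
    for attempt in range(checks):
        status = requests.get(f"{ES_URL}/_cluster/health", timeout=10).json()["status"]
        if status == "green":
            return False  # recovered; do not report degraded
        if attempt < checks - 1:
            time.sleep(interval)
    return True  # yellow (or red) on every check


if __name__ == "__main__":
    # Hypothetical output your health check script could consume.
    print("degraded" if cluster_is_persistently_yellow() else "healthy")
```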
I understand your sentiment as well. It "feels" like a false positive from a maintenance point of view.
It's especially noticeable when you add new data nodes.
Shard rebalancing usually takes hours, so you could have a yellow indicator for hours until the new nodes are fully integrated. Replica creation for new indices waits in the action queue just like other operations.
You could tweak "cluster_concurrent_rebalance" and "node_concurrent_recoveries" to mitigate this (sketched below), but I believe the default values will cause new indices to show yellow for a prolonged time when new data nodes are introduced.
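For reference, a sketch of raising those limits temporarily via the cluster settings API; the values shown are illustrative, not recommendations, and `ES_URL` is an assumption:

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: point this at your cluster

# Temporarily raise concurrency limits while new data nodes join, so replica
# allocation and rebalancing finish sooner. Higher values mean more disk and
# network load during recovery.
settings = {
    "transient": {
        "cluster.routing.allocation.cluster_concurrent_rebalance": 4,
        "cluster.routing.allocation.node_concurrent_recoveries": 4,
    }
}

resp = requests.put(f"{ES_URL}/_cluster/settings", json=settings, timeout=10)
resp.raise_for_status()
print(resp.json())

# To revert to the defaults once the nodes have settled, set the same keys
# to null (None in Python) in another PUT to _cluster/settings.
```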
I also understand the reason for showing yellow, because the potential for failure is real.
The best solution in my opinion is for "normal" operations to use different "queues" (temporarily) so the system doesn't end up in this condition. Just allow replica creation for new indices to exceed the per-node concurrency limit.
A warning loses its purpose if it's part of "normal" operation, and especially in this case, it's avoidable. Just allow a fast path to handle index creation.