About a rolling upgrade problem

I upgraded from 6.5 to 6.5.1. We have a 3-node cluster.

I upgraded according to the official documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html

But the results differ from what the document describes.

The document says:

Before upgrading the next node, wait for the cluster to finish shard allocation. You can check progress by submitting a _cat/health request:
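That check can be scripted. This is a minimal sketch, not a live call: the line below is a made-up sample in the default `_cat/health` column order, standing in for `curl -s 'http://localhost:9200/_cat/health'` (adjust the host to your cluster):

```shell
# One line as returned by: curl -s 'http://localhost:9200/_cat/health'
# Default columns: epoch timestamp cluster status node.total node.data
#   shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
health='1565077704 07:48:24 my-cluster yellow 3 3 10 5 0 1 4 0 - 66.6%'

# Column 4 is the overall cluster status (green / yellow / red)
status=$(echo "$health" | awk '{print $4}')
echo "cluster status: $status"
```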

When I upgraded the first node, I waited a long time, but the node never finished synchronizing.

Strangely, it is not that all indices failed to synchronize; only one index's recovery was stuck.
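To see why that one shard stays stuck, the `_cluster/allocation/explain` API can be asked. A sketch, with an abridged, hypothetical response pasted in place of a live call to `curl -s 'http://localhost:9200/_cluster/allocation/explain'`:

```shell
# Abridged, hypothetical sample of an allocation-explain response;
# a live cluster would be queried with:
#   curl -s 'http://localhost:9200/_cluster/allocation/explain'
explain='{"index":".monitoring-es-6-2019.08.06","shard":0,"primary":false,"current_state":"unassigned","allocate_explanation":"cannot allocate because allocation is not permitted to any of the nodes"}'

# Pull the shard state and the human-readable explanation out with sed
state=$(echo "$explain" | sed -n 's/.*"current_state":"\([^"]*\)".*/\1/p')
why=$(echo "$explain" | sed -n 's/.*"allocate_explanation":"\([^"]*\)".*/\1/p')
echo "state: $state"
echo "why:   $why"
```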


I then upgraded the second node ("node2") directly, regardless of the synchronization status. After that, node1 synchronized normally, but node2 did not.

After I upgraded node3, node2 synchronized, but node3 remained unsynchronized.
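While a node was stuck, listing the shards of the affected index shows which copies stay unassigned. A sketch against a captured-style sample (a live cluster would be queried with `curl -s 'http://localhost:9200/_cat/shards/.monitoring-es-6-2019.08.06'`; the two lines below are made up):

```shell
# Made-up sample of _cat/shards output for the stuck index
# (columns: index shard prirep state docs store ip node)
shards='.monitoring-es-6-2019.08.06 0 p STARTED 1234 1mb 172.17.80.215 node-2
.monitoring-es-6-2019.08.06 0 r UNASSIGNED'

# Count how many shard copies are still unassigned
unassigned=$(echo "$shards" | grep -c 'UNASSIGNED')
echo "unassigned copies: $unassigned"
```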

When I shut down all three nodes and started them again, to my surprise all nodes were synchronized.

Where does this problem come from?

There are very few error messages in the log:

Caused by: org.elasticsearch.action.UnavailableShardsException: [.monitoring-es-6-2019.08.06][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-es-6-2019.08.06][0]] containing [index {[.monitoring-es-6-2019.08.06][doc][xNXmZWwB8Aepy46ijY3V], source[{"cluster_uuid":"kiomBtJqSCiFTFxjCRsA6A","timestamp":"2019-08-06T07:48:24.122Z","interval_ms":10000,"type":"node_stats","source_node":{"uuid":"on2XxWbZTCGG1rjax8qhMA","host":"172.17.80.215","transport_address":"172.17.80.215:9300","ip":"172.17.80.215","name":"node-2","timestamp":"2019-08-06T07:48:24.120Z"},"node_stats":{"node_id":"on2XxWbZTCGG1rjax8qhMA","node_master":false,"mlockall":true,"indices":{"docs":{"count":0},"store":{"size_in_bytes":0},"indexing":{"index_total":0,"index_time_in_millis":0,"throttle_time_in_millis":0},"search":{"query_total":0,"query_time_in_millis":0},"query_cache":{"memory_size_in_bytes":0,"hit_count":0,"miss_count":0,"evictions":0},"fielddata":{"memory_size_in_bytes":0,"evictions":0},"segments":{"count":0,"memory_in_bytes":0,"terms_memory_in_bytes":0,"stored_fields_memory_in_bytes":0,"term_vectors_memory_in_bytes":0,"norms_memory_in_bytes":0,"points_memory_in_bytes":0,"doc_values_memory_in_bytes":0,"index_writer_memory_in_bytes":0,"version_map_memory_in_bytes":0,"fixed_bit_set_memory_in_bytes":0},"request_cache":{"memory_size_in_bytes":0,"evictions":0,"hit_count":0,"miss_count":0}},"os":{"cpu":{"load_average":{"1m":0.0,"5m":0.06,"15m":0.06}},"cgroup":{"cpuacct":{"control_group":"/","usage_nanos":1216118268108},"cpu":{"control_group":"/","cfs_period_micros":100000,"cfs_quota_micros":-1,"stat":{"number_of_elapsed_periods":0,"number_of_times_throttled":0,"time_throttled_nanos":0}},"memory":{"control_group":"/","limit_in_bytes":"9223372036854771712","usage_in_bytes":"3098001408"}}},"process":{"open_file_descriptors":265,"max_file_descriptors":65536,"cpu":{"percent":0}},"jvm":{"mem":{"heap_used_in_bytes":157111272,"heap_used_percent":14,"heap_max_in_bytes":10563
09248},"gc":{"collectors":{"young":{"collection_count":14,"collection_time_in_millis":775},"old":{"collection_count":2,"collection_time_in_millis":150}}}},"thread_pool":{"generic":{"threads":4,"queue":0,"rejected":0},"get":{"threads":0,"queue":0,"rejected":0},"index":{"threads":0,"queue":0,"rejected":0},"management":{"threads":2,"queue":0,"rejected":0},"search":{"threads":0,"queue":0,"rejected":0},"watcher":{"threads":0,"queue":0,"rejected":0},"write":{"threads":2,"queue":0,"rejected":0}},"fs":{"total":{"total_in_bytes":53660876800,"free_in_bytes":45866979328,"available_in_bytes":45866979328},"io_stats":{"total":{"operations":261,"read_operations":0,"write_operations":261,"read_kilobytes":0,"write_kilobytes":7579}}}}}]}]]
        ... 12 more
[2019-08-06T15:49:24,164][WARN ][o.e.x.m.MonitoringService] [node-2] monitoring execution failed
org.elasticsearch.xpack.monitoring.exporter.ExportException: Exception when closing export bulk
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$1$1.<init>(ExportBulk.java:95) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$1.onFailure(ExportBulk.java:93) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound$1.onResponse(ExportBulk.java:206) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound$1.onResponse(ExportBulk.java:200) ~[?:?]
        at org.elasticsearch.xpack.core.common.IteratingActionListener.onResponse(IteratingActionListener.java:115) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$doFlush$0(ExportBulk.java:164) ~[?:?]
        at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:68) ~[elasticsearch-6.5.1.jar:6.5.1]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:135) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:111) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$doFlush$0(ExportBulk.java:156) ~[?:?]
        ... 26 more
Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: bulk [default_local] reports failures when exporting documents
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:124) ~[?:?]
        ... 24 more

I found this passage in the document:

During a rolling upgrade, primary shards assigned to a node running the new version cannot have their replicas assigned to a node with the old version. The new version might have a different data format that is not understood by the old version.

If it is not possible to assign the replica shards to another node (there is only one upgraded node in the cluster), the replica shards remain unassigned and status stays yellow.

In this case, you can proceed once there are no initializing or relocating shards (check the init and relo columns).

As soon as another node is upgraded, the replicas can be assigned and the status will change to green.
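Checking the init and relo columns the document mentions works like the status check. A sketch over a made-up `_cat/health` line in the default column order (on a live cluster the columns can also be selected explicitly with `?h=relo,init`):

```shell
# Made-up _cat/health line in the default column order:
# epoch timestamp cluster status node.total node.data shards pri relo init unassign ...
health='1565077704 07:48:24 my-cluster yellow 3 3 10 5 0 1 4 0 - 66.6%'

# relo is column 9, init is column 10
relo=$(echo "$health" | awk '{print $9}')
init=$(echo "$health" | awk '{print $10}')
echo "relocating: $relo, initializing: $init"

# Per the document, it is safe to proceed once both are zero
if [ "$relo" = "0" ] && [ "$init" = "0" ]; then
  echo "safe to proceed"
else
  echo "wait: shards still moving"
fi
```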

But my question is: after the last node is upgraded, how does the cluster change from yellow to green?
My cluster was installed at version 6.5 from the start; it was not upgraded from an older version.