On upgrading from 2.1 > 2.3 (rolling & breaking mapping changes)

Rolling Upgrade

According to the compatibility table it should be possible to perform a rolling upgrade from 2.1.1 > 2.3.0. The steps to perform a rolling upgrade seem straightforward.

But this warning stands out: "If it is not possible to assign the replica shards to another node with the higher version — e.g. if there is only one node with the higher version in the cluster — then the replica shards will remain unassigned and the cluster health will remain status yellow."

I'm a little unclear, then: how is it possible to do a rolling upgrade? If I have 8 nodes and upgrade Node-1, doesn't this mean the cluster will stay Yellow until there are at least enough upgraded nodes to fulfill the replication strategy? If I shut down and upgrade multiple nodes at once, the cluster state will be Red. And what happens to the replica shards already on the other nodes?
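For context, the allocation-juggling part of the rolling upgrade can be sketched with the 2.x cluster settings and synced flush APIs. This is only a sketch; the `$ES` host/port is an assumption, so adjust for your cluster.

```shell
# Sketch of the per-node allocation steps used during a rolling upgrade,
# via the 2.x cluster settings and synced flush APIs. $ES is an assumption.
ES="${ES:-http://localhost:9200}"

# Stop the cluster from re-replicating shards while a node is down.
disable_allocation() {
  curl -s -XPUT "$ES/_cluster/settings" -d '{
    "transient": { "cluster.routing.allocation.enable": "none" }
  }'
}

# Speed up recovery of the restarted node's shards.
synced_flush() {
  curl -s -XPOST "$ES/_flush/synced"
}

# Re-enable allocation once the upgraded node has rejoined.
enable_allocation() {
  curl -s -XPUT "$ES/_cluster/settings" -d '{
    "transient": { "cluster.routing.allocation.enable": "all" }
  }'
}
```

The idea is: disable allocation, stop and upgrade one node, start it, re-enable allocation, wait for recovery, then move to the next node.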

Breaking Changes

[2.2] "The geo_point format has been changed to reduce index size and the time required to both index and query geo point data. To make these performance improvements possible both doc_values and coerce are required and therefore cannot be changed. For this reason the doc_values and coerce parameters have been removed from the geo_point field mapping."

We have several active indices with the mapping:

"location" :  {
    "type" : "geo_point",
    "lat_lon": true,
    "doc_values": true,
    "fielddata" : {
        "format" : "compressed",
        "precision" : "3m"
    }
}

What would happen to those indices if we upgrade to 2.3?

Clients

Do we have to upgrade our Java client apps (which are using the transport client) at the same time - or can a 2.1.1 client connect without issues to a 2.3.0 server?

I am not 100% sure of this, but I think the cluster will remain yellow as long as it is ONLY replica shards that are unassigned.
The cluster will only go red if an actual primary shard is missing, not just replicas.

I have a 3-node dev cluster with one client, one master and one data node. It always stays yellow if I create a replica.
All replicas are unassigned as there is only one data node. So if I create a new index with 3 shards and 1 replica, all three of the replicas will be unassigned and the cluster is yellow, but it still functions and never goes red as long as the 3 nodes are healthy.
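If anyone wants to reproduce that scenario, here's a minimal sketch. The host and index name are assumptions:

```shell
# Sketch: create an index with 3 shards and 1 replica on a cluster with a
# single data node, then check health (it should report yellow).
# $ES and the index name "replica-test" are assumptions.
ES="${ES:-http://localhost:9200}"

create_test_index() {
  curl -s -XPUT "$ES/replica-test" -d '{
    "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
  }'
}

cluster_health() {
  curl -s "$ES/_cluster/health?pretty"
}
```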

Additionally, I commented on this so I can see others' input as well.

Mark

Bottom line here is that ES will never allow a situation where a primary is on a new node and the replica is on an old one. On the flip side, primaries on old nodes and replicas on new nodes is fine. Typically what will happen is that when you upgrade a node, all primaries will be on other nodes and the newly upgraded node will be used for replicas. This will continue until the cluster is fully upgraded. The statement is mostly about indices created during the upgrade, which may have their primaries assigned to the first upgraded node. In that case ES will have to wait in yellow until more nodes are upgraded.
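You can watch this reshuffling happen with the cat shards API; a quick sketch (the `$ES` host is an assumption):

```shell
# Show which node holds each primary (p) and replica (r) shard,
# useful for watching allocation during a rolling upgrade.
# $ES is an assumption.
ES="${ES:-http://localhost:9200}"

show_shards() {
  curl -s "$ES/_cat/shards?v&h=index,shard,prirep,state,node"
}
```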

Yes. You can leave your clients on 2.1.1 (but do upgrade as soon as you can).

Since the new format needs doc values, I suspect you won't have any problems.
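My understanding (an assumption based on the breaking-changes note, not something I've verified against 2.3) is that the 2.2+ equivalent of that field simply drops the removed parameters, with doc values always on and the in-memory fielddata settings no longer applying:

```json
"location" : {
    "type" : "geo_point",
    "lat_lon": true
}
```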

Thanks, that makes sense. I have a script to perform a rolling restart for maintenance/config changes. It waits for the cluster state to be green after each node is restarted - so it won't work as-is in this situation.
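In case it's useful to anyone else, here's a rough sketch of how I'm adapting the wait step: polling `_cluster/health` and accepting yellow as well as green during the upgrade. The `$ES` host is an assumption.

```shell
# Sketch: during a rolling upgrade, wait for the cluster to reach at least
# "yellow" rather than insisting on "green". $ES is an assumption.
ES="${ES:-http://localhost:9200}"

# Extract the "status" field from a _cluster/health JSON response.
health_status() {
  echo "$1" | sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([a-z]*\)".*/\1/p'
}

# Poll until the cluster reports yellow or green.
wait_for_yellow() {
  while :; do
    status="$(health_status "$(curl -s "$ES/_cluster/health")")"
    [ "$status" = "yellow" ] || [ "$status" = "green" ] && return 0
    sleep 5
  done
}
```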