That recipe makes the incredibly naive assumption of a cluster with
only two nodes. Long story short, you can't. Or rather, you probably can,
but it would require a lot of work. An entire secondary cluster would need
to be set up and populated with data. In the past, I have found that even
two different versions on the network can cause issues with multicast zen
discovery. If you are using the TransportClient, all of your clients would
have to be reconfigured as well.
Rolling upgrades were a feature proposed for 1.0, but the community has
been kept in the dark about what they will actually contain.
Cheers,
Ivan
On Mon, Jun 24, 2013 at 11:40 PM, David Pilato david@pilato.fr wrote:
The critical phase of a rolling upgrade is how to suspend ongoing writes
and redirect new writes into temporary storage that can later be replayed
automatically. I assume this is the feature that will appear in 1.0.
The steps can be done manually, as in Clinton's gist, but it also depends
on how the external clients write to the cluster. You could force external
clients to drop their connections while the migration takes place, so that
they have to reconnect later. I doubt this can happen without exceptions
(and maybe errors and write data loss) on the client side. If you accept
exceptions and ignore how clients resume smoothly, rolling upgrades are
possible. Front-end load balancers can be very helpful for switching to
the new ES cluster with zero downtime.
Jörg
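For anyone following along, the suspend-and-replay idea above can be sketched in a few lines of Python. Everything here is hypothetical (the `send` callable stands in for whatever actually performs your writes); it is just to illustrate the shape of the mechanism, not how 1.0 will implement it:

```python
from collections import deque

class BufferingWriter:
    """Wraps a write function: buffers writes while a migration is
    in progress and replays them, in order, once the new cluster is up."""

    def __init__(self, send):
        self.send = send          # callable that performs the real write
        self.migrating = False    # set True while the cluster is switched
        self.buffer = deque()     # temporary storage for suspended writes

    def write(self, doc):
        if self.migrating:
            self.buffer.append(doc)   # suspend: park the write
        else:
            self.send(doc)            # normal path

    def replay(self):
        """Flush buffered writes to the (new) cluster, then resume."""
        while self.buffer:
            self.send(self.buffer.popleft())
        self.migrating = False
```

The hard part Jörg points at is not this wrapper but making the switch atomic from the clients' point of view: anything written between "suspend" and "replay finished" must land in the buffer, not be lost with a dropped connection.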
On Tue, Jun 25, 2013 at 4:57 PM, Ivan Brusic ivan@brusic.com wrote:
Re-reading my post, I sounded a bit negative. Rolling upgrades are hard
primarily because cross-version clusters are not permitted. The binary
protocol is version specific, so you have issues not only between cluster
nodes, but also between client and server if you are using the Java
clients.
On non-trivial clusters, you can "decommission" nodes using the cluster
update settings API. Sematext has a good blog post about it.
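The decommissioning call looks roughly like this, using the transient allocation-exclusion setting. The host and IP are made up, and you should check the setting name against the docs for your version; a sketch, not a recipe:

```python
import json
import urllib.request

def decommission_body(ip):
    """Build the transient settings body that asks ES to move all
    shards off the node with the given IP."""
    return json.dumps({
        "transient": {
            "cluster.routing.allocation.exclude._ip": ip
        }
    })

def decommission(host, ip):
    """PUT the settings to a live cluster, e.g. decommission("localhost:9200", "10.0.0.1").
    Once shards have drained off the node, it can be stopped and upgraded."""
    req = urllib.request.Request(
        url="http://%s/_cluster/settings" % host,
        data=decommission_body(ip).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    return urllib.request.urlopen(req).read()
```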
If you are using the Java clients, you might need to set up load-balancing
nodes to help with the transition.
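A load-balancing ("client") node of the kind Ivan mentions is typically one that joins the cluster and routes requests but holds no data and is not master-eligible. In elasticsearch.yml that would look something like this (exact settings vary by version, so treat this as a sketch):

```yaml
# elasticsearch.yml for a pure client / load-balancer node:
# joins the cluster and routes requests, but holds no shards
# and never becomes master
node.data: false
node.master: false
```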
I'm still hoping that rolling upgrades make it into 1.0.