I've been asked to investigate what I thought was an upgrade of an Elasticsearch/Kibana solution (both running in Docker) from a 5.2.2 instance to 7.3.
I was looking at the upgrade instructions for Elasticsearch, and I understand that a true in-place upgrade would have to go 5.2.2 -> 5.6.16 -> 6.8.2 -> 7.3.1, based on what I've seen online.
But I don't think this is technically an upgrade: the 7.3.1 cluster will run on different hosts, so no matter what I'll probably be remote-reindexing the 5.2.2 indices into the 7.3.1 cluster.
Is this jump possible with remote reindexing, or do I still need to go the rolling-upgrade route for Elasticsearch?
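For context, the kind of migration I have in mind is reindex-from-remote on the new cluster, something like the sketch below (host and index names are placeholders; the old cluster first has to be whitelisted in the new cluster's `elasticsearch.yml` via `reindex.remote.whitelist: "old-es-host:9200"`):

```
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://old-es-host:9200"
    },
    "index": "my-index"
  },
  "dest": {
    "index": "my-index"
  }
}
```

This is run against the 7.3.1 cluster, which pulls documents from the 5.2.2 cluster over HTTP, so the two clusters never have to be part of the same deployment.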
The advantage of the upgrade route you described (5.2.2 -> 5.6.16 -> 6.8.2 -> 7.3.1) is that each step gives you the opportunity to check for your clients' usage of deprecated features and to use the upgrade assistant to ensure that the upgrade will run smoothly. If you do the upgrade in one step then you might have to put a lot more effort into identifying the breaking changes that will affect you.
Thanks for the info. Maybe I could validate whether the stepped upgrade route is actually required?
i.e. remote-reindex a subset of the data from each of the indices I'm migrating, and then use the Validate API to confirm the work (would this surface usages of deprecated features?). If I get an error in the test, then I guess I have to go through the stepped approach.
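To make that concrete, I'm thinking of checks along these lines against the 7.3.1 cluster after the test reindex (index name and query are placeholders for our real client queries):

```
GET my-index/_validate/query?explain=true
{
  "query": {
    "match": { "message": "some text" }
  }
}
```

The idea would be to replay a representative sample of our clients' queries this way and look for errors or warnings in the responses.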
You might get some deprecation logging from the Validate API, but it won't cover everything that has been deprecated since 5.x. Breaking changes can affect a lot more than just the search APIs.
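If you do want a programmatic check, the Deprecation Info API (which backs the Upgrade Assistant) is closer to what you're after; it reports cluster, node, and index settings that will break in the next major version. Note it only looks one major version ahead, which is one reason the stepped route is recommended. The endpoint moved between versions (paths below are from memory, so verify against the docs for your version):

```
# On a 6.8 cluster:
GET /_xpack/migration/deprecations

# On a 7.x cluster:
GET /_migration/deprecations
```

Since a 5.2.2 cluster predates this API, running it would itself require first getting onto 5.6.x or 6.8.x.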