Elasticsearch upgrade from 7 to 8

Hi All,

Our current cluster is running Elasticsearch 7.17.3 and we are planning to upgrade to version 8.15.2 (hopefully an allowed version).

We want to adopt a "rolling upgrade", doing one instance at a time. We have run the Upgrade Assistant and fixed all reported issues. A full snapshot has also been taken to a local NAS.
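For reference, we confirmed the snapshot completed with something like the following (the repository and snapshot names here are placeholders, not our real ones):

curl -s "localhost:9200/_snapshot/nas_repo/pre_upgrade_snapshot?pretty"
# the response should show "state" : "SUCCESS" before we proceed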

After checking the page Upgrade Elasticsearch | Elastic Installation and Upgrade Guide [8.15] | Elastic, we presume that the following steps are needed for the upgrade:

  1. Disable shard allocation. (Example requests for steps 1 and 2 are sketched after this list.)

  2. Stop non-essential indexing and perform a flush. (Optional)

  3. Temporarily stop the tasks associated with active machine learning jobs and datafeeds. (Optional, and we don't have any)

  4. Shut down a single node.

  5. Upgrade the node you shut down.
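For steps 1 and 2 we are planning to use requests along these lines (host and port are our defaults; adjust as needed):

curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}'

curl -X POST "localhost:9200/_flush?pretty"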

For step 5 above we have a compressed tar.gz file for elasticsearch-8.15.2, which we have extracted under a specific directory (parallel to the 7.17.3 install).
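For completeness, the extraction was just something along these lines (the exact archive name depends on which platform build was downloaded, so treat it as illustrative):

cd /opt/tvportal/elasticsearch
tar -xzf /path/to/elasticsearch-8.15.2-linux-x86_64.tar.gz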

My question relates to the next three steps, which refer to the "config", "data" and "logs" directories. In our case "config", "data" and "logs" point to external directories and the embedded ones are not used.
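For context, the external locations are wired in roughly like this (the paths below are illustrative, not our real ones):

# exported by the start script
export ES_PATH_CONF=/opt/tvportal/elasticsearch/config

# in elasticsearch.yml under that config directory
path.data: /opt/tvportal/elasticsearch/data
path.logs: /opt/tvportal/elasticsearch/logs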

It seems that all we need to do is point the "current" symlink below to the new install and bring up the node. Our start script launches whichever Elasticsearch version "current" points to, and the external "config" and "data" directories are referenced from there:

[tvportal@ad-ccf-ddfg ]$ cd /opt/tvportal/elasticsearch/
[tvportal@sd-afb7-1f0c elasticsearch]$ ls -trl
total 16
lrwxrwxrwx. 1 tvportal tvportal 58 Oct 31 2022 current -> /opt/tvportal/elasticsearch/elasticsearch-8.15.2
drwxr-xr-x. 9 tvportal tvportal 4096 Sep 19 2024 elasticsearch-8.15.2
drwxr-xr-x. 9 tvportal tvportal 4096 Dec 12 11:21 elasticsearch-7.17.3
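So the switch itself would be something like the following, run as the service user with the node already stopped, followed by a version check once it is back up:

cd /opt/tvportal/elasticsearch
ln -sfn elasticsearch-8.15.2 current

# after restarting the node
curl -s "localhost:9200/?pretty"   # "number" under "version" should read 8.15.2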

Please advise whether the understanding above is correct, as we are doing this for the first time.

8.15.2, released Sep 26, 2024, is a curious choice on Feb 3, 2026! Anyway ...

Your start script does what it does; if you share it, we can take a look at that too.

You don't say anything about your cluster - how many nodes, what sort of nodes, how many indices, how much data? Is there any chance there are indices created by 6.x in the cluster?
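If you want to check that last point, the get-settings API shows the version each index was created with; values starting with 6 mean the index was created on 6.x and must be reindexed before moving to 8.x. Something like:

curl -s "localhost:9200/_all/_settings?filter_path=*.settings.index.version.created&pretty"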

I am curious as to what the issues reported by the Upgrade Assistant were.

If it were me, I'd set up a small test cluster with 7.17.3 on it, likely on VMs, with as much in common with the prod cluster as I could configure, snapshot it to create a baseline, and run the same upgrade process there a few times to practice.
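Registering a snapshot repository on such a test cluster is just a couple of requests; the repository name and path below are made up, and the path has to be listed under path.repo in elasticsearch.yml first:

curl -X PUT "localhost:9200/_snapshot/test_repo?pretty" -H 'Content-Type: application/json' -d'
{ "type": "fs", "settings": { "location": "/mnt/backups/test_repo" } }'

curl -X PUT "localhost:9200/_snapshot/test_repo/baseline?wait_for_completion=true&pretty"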