Upgrade Elasticsearch 8.6.0 to a new environment

Hi All,

currently I have Elasticsearch version 8.6.0 with 2 Elasticsearch nodes, 1 Kibana node, 1 Fleet Server and 1 Logstash node, each installed on RHEL 7.9. I want to upgrade the Elastic Stack to version 8.10, but on new infrastructure, namely RHEL 8. I have read the documentation (Migrating data | Elasticsearch Service Documentation | Elastic) and found 3 viable options.

So that configurations such as alert rules, role mappings, current logs and agent policies are not lost, which option is the best one for me?

Or is it possible to use the "add a new node to an existing cluster, then replace the existing node" method,
as per the following documentation: Add and remove nodes in your cluster?

Thank you to anyone who can provide advice and help with my problem.


My understanding of your situation:

  • you have a running cluster on version 8.6.0
  • the cluster runs on RHEL 7.9 nodes

and you want to end up with:

  • a running cluster on version 8.10.x
  • the OS upgraded to RHEL 8
  • no data lost
  • your alerts, etc. kept (saved objects)

I have not read the provided link, but my approach would be:

  1. Create new RHEL 8 nodes
  2. Install Elasticsearch 8.6.0 on the new nodes and have them join your cluster
  3. Repeat for Fleet and Kibana
  4. Shut down your RHEL 7.9 nodes one by one
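For step 2, a minimal sketch of the elasticsearch.yml a new node would need in order to join the existing cluster — the cluster name, node name, IPs and certificate paths below are all placeholders, assuming you reuse your own self-signed CA:

```yaml
# elasticsearch.yml on a new RHEL 8 node -- all names/IPs are placeholders
cluster.name: my-cluster                 # must match the existing cluster exactly
node.name: new-node-1
network.host: 10.0.0.11

# point discovery at the existing 8.6.0 nodes so the new node joins them;
# do NOT set cluster.initial_master_nodes when joining an existing cluster
discovery.seed_hosts: ["10.0.0.1", "10.0.0.2"]

# transport TLS must use the same CA as the old nodes so both sides trust each other
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/transport.p12
xpack.security.transport.ssl.truststore.path: certs/transport.p12
```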

You have now upgraded the OS to RHEL 8.

  5. Upgrade the cluster to 8.10

A note on step 4: you need to make sure your shards are relocated to the new nodes before each old node is shut down, otherwise you risk losing data. Shard allocation filtering is the usual way to drain a node first.
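A sketch of draining a node with shard allocation filtering, in Kibana Dev Tools syntax, with a placeholder node name:

```
# ask the cluster to move every shard off the node being retired
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.exclude._name": "old-node-1"
  }
}

# watch relocation; only shut the node down once it holds no shards
GET _cat/shards?v
```

Once the old node is gone, clear the exclusion again by setting the same key to null.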

This approach first upgrades your OS by introducing new nodes into the cluster, then decommissions the old nodes, and finally upgrades the cluster. The benefit is that you don't have to migrate any data, because your cluster remains the same active one throughout.
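The final upgrade to 8.10 would then follow the standard rolling-upgrade pattern, one node at a time; a sketch in Dev Tools syntax:

```
# 1. before stopping a node, restrict allocation to primaries only
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}

# 2. optionally flush so shard recovery is faster after restart
POST _flush

# ... stop the node, upgrade the package to 8.10.x, start it again ...

# 3. re-enable allocation and wait for green before moving to the next node
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
GET _cluster/health?wait_for_status=green&timeout=60s
```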


Hi Sholzhauer,

Thank you for your response. I have tried to join a new node to the old cluster, but the cluster status changed to RED and the Kibana portal reported that it was not ready yet.

I think there is a shard allocation error.
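A quick way to confirm (or rule out) an allocation problem is to ask the cluster directly; these read-only requests, in Dev Tools syntax, show the status and why shards are unassigned:

```
# overall status and the number of unassigned shards
GET _cluster/health

# why the first unassigned shard cannot be allocated
GET _cluster/allocation/explain

# list shards with their state and unassignment reason
GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason
```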

I still don't understand how to determine that one new node will replace the old master node, and another new node will replace the old data node.
In this case I use a self-signed certificate (not the auto-configured security).



I have added a new node to the old cluster; is there any additional configuration I need to do, or do I go straight to step 4? Will the shards and indices on the old nodes be damaged or lost when I shut them down?

Thank you in advance for your response.