I have the following use case:
I want to install a new Elasticsearch 7.13.2 cluster on brand-new VMs and migrate the indices and the global cluster state from a 7.6.2 cluster into it.
The challenge: logs are shipped to the cluster continuously via Logstash, and after the migration I do not want any downtime, any duplicate logs, or a single missing log line.
I have already set everything up, including Kibana, all security settings, and the system indices, and I am now testing different index migration scenarios.
The best, easiest, and safest scenario I have figured out so far is the following:
- Change the Logstash output of a pipeline to the new cluster nodes
- Reload the Logstash pipelines (SIGHUP)
- Logstash creates a new index in the new cluster according to the template, alias, and ILM policy configuration
- New logs flow to the new cluster and no longer to the old one
- I create a snapshot of the index in the old cluster
- I restore the index with a `_restored` suffix in the new cluster, so all Kibana index patterns show the restored logs too
- Migration is done.
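For the output switch in the first two steps, the relevant pipeline change might look like this (a sketch; the host names and index pattern are hypothetical placeholders for your own setup):

```
output {
  elasticsearch {
    # before the switch:
    # hosts => ["https://old-node1:9200", "https://old-node2:9200"]
    hosts => ["https://new-node1:9200", "https://new-node2:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```

After editing the config, sending SIGHUP to the Logstash process (e.g. `kill -SIGHUP $(pgrep -f logstash)`) triggers the pipeline reload; with `config.reload.automatic: true` in logstash.yml the change would be picked up on its own.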
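The snapshot and restore steps can be sketched with the snapshot REST API, assuming a repository named `migration` that is registered on both clusters and reachable by both (repository, host, and index names are hypothetical):

```shell
# On the old cluster: snapshot the index into the shared repository.
curl -X PUT "https://old-node1:9200/_snapshot/migration/snap-logs?wait_for_completion=true" \
  -H 'Content-Type: application/json' \
  -d '{"indices": "logs-2021.06.01", "include_global_state": false}'

# On the new cluster: restore it under a _restored suffix so it does not
# collide with the index Logstash is already writing to.
curl -X POST "https://new-node1:9200/_snapshot/migration/snap-logs/_restore" \
  -H 'Content-Type: application/json' \
  -d '{
        "indices": "logs-2021.06.01",
        "rename_pattern": "(.+)",
        "rename_replacement": "$1_restored"
      }'
```

`rename_pattern`/`rename_replacement` do the `_restored` renaming on the fly during restore, so the snapshot itself stays untouched.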
The current drawbacks of this solution:
- I rely on Logstash's internal mechanisms to ensure no logs are dropped during the pipeline output switch to the new Elasticsearch nodes. I verified this by randomly comparing log lines, but that is of course not concrete proof.
- There is a window during which logs from the old cluster are not yet available in the new cluster, which is... well... not great, but tolerable.
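One way to make the spot check for the first drawback more systematic is to compare document counts for a closed time window on both clusters with the `_count` API; once the window is fully ingested, the totals should match exactly (hosts and index names are again hypothetical):

```shell
# Count documents in the same fixed time window on each cluster.
QUERY='{"query":{"range":{"@timestamp":{"gte":"2021-06-01T00:00:00Z","lt":"2021-06-01T01:00:00Z"}}}}'

curl -s -H 'Content-Type: application/json' \
  "https://old-node1:9200/logs-2021.06.01/_count" -d "$QUERY"

curl -s -H 'Content-Type: application/json' \
  "https://new-node1:9200/logs-2021.06.01_restored/_count" -d "$QUERY"
```

Equal counts per window are still not proof of identical content, but running this over every window is a much stronger signal than comparing random lines.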
Can anyone share a better way of migrating live data from one Elasticsearch cluster to another without downtime and without losing any log data?
I am thankful for any hints.