DEPLOY ELK STACK (with ECK) using CLONED DISKS

Is it possible to deploy a new copy of Elasticsearch on Kubernetes (GKE) using cloned disks from another Elasticsearch deployment in another cluster, without conflicts?

Are there any workarounds? I need to restore logs saved on those disks, so any approach besides the one I'm describing would also be helpful and appreciated.

Hello @Maria_Gabriela_Perez, I believe it's not possible to do what you want to achieve without conflicts or issues. The reason is that Elasticsearch stores all of the cluster and node state on the underlying volumes mounted to it. So if you attach a cloned volume to another ES node, that node will try to join the existing or old cluster with the same identity as your original node, which will definitely lead to conflicts.

Instead, if you just want to move logs from one cluster to another, there are a few options:

  1. Since you are interested in disk-based movements, you can copy the indices directory under ${path.data}/nodes/0/ from the old node to the new node (assuming the new node is already running in a separate cluster).
  2. Create a snapshot of the required indices from the first cluster and restore them in the new cluster (see the sketch after this list).
  3. Configure CCR (cross-cluster replication) between the two clusters and set up the index to be followed.
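A minimal sketch of option 2 using the snapshot and restore REST APIs from Python. The hostnames, credentials, repository name (logs_repo) and index pattern (filebeat-*) are placeholders, not values from this thread; the same repository (e.g. a GCS or S3 bucket) is assumed to be registered in both clusters, ideally as read-only in the destination.

```python
import requests

OLD = "https://old-cluster.example.com:9200"   # placeholder source cluster
NEW = "https://new-cluster.example.com:9200"   # placeholder destination cluster
AUTH = ("elastic", "changeme")                 # replace with real credentials
REPO = "logs_repo"                             # repository registered in both clusters
SNAPSHOT = "logs-migration-1"

# 1. Take a snapshot of the indices on the old cluster and wait for it to finish.
r = requests.put(
    f"{OLD}/_snapshot/{REPO}/{SNAPSHOT}?wait_for_completion=true",
    json={"indices": "filebeat-*", "include_global_state": False},
    auth=AUTH,
    verify=False,  # only if you are using self-signed certificates
)
r.raise_for_status()

# 2. Restore the same snapshot on the new cluster from the shared repository.
r = requests.post(
    f"{NEW}/_snapshot/{REPO}/{SNAPSHOT}/_restore",
    json={"indices": "filebeat-*", "include_global_state": False},
    auth=AUTH,
    verify=False,
)
r.raise_for_status()
print(r.json())
```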

I tried this and the service crashed completely. Those are indices, right? What about the folder containing only saved logs (from Filebeat)? Which path should I copy?

Filebeat doesn't save any logs; they are pushed to and stored in Elasticsearch as indices. What exactly have you tried?

I copied the entire 0 folder as you mentioned, but apparently there's something in there that belongs to the other cluster, so the Kibana pods crash, and when I restart the Elasticsearch pods, those crash too.

I figured that out recently; I didn't understand how logs were stored in the Elasticsearch data folders. Restoring snapshots is just so slow, and it's giving me too many internal server errors with no further explanation. We have tons of objects to migrate, and doing it by hand is taking way too long.

Cloning disks or copying data at the file system level may have worked in very old versions of Elasticsearch, but that is no longer the case. In order to move data from one cluster to another you either need to use snapshot and restore or reindex the data from remote.
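For completeness, here is a rough sketch of the reindex-from-remote alternative, again with placeholder hosts, credentials and index names. The source host must be listed under reindex.remote.whitelist in the destination cluster's elasticsearch.yml before the request is accepted.

```python
import requests

NEW = "https://new-cluster.example.com:9200"   # placeholder destination cluster
AUTH = ("elastic", "changeme")                 # replace with real credentials

body = {
    "source": {
        "remote": {
            "host": "https://old-cluster.example.com:9200",  # placeholder source cluster
            "username": "elastic",
            "password": "changeme",
        },
        "index": "filebeat-2023.01.01",
    },
    "dest": {"index": "filebeat-2023.01.01"},
}

# Run the reindex asynchronously so large indices don't block the HTTP call;
# the response contains a "task" id you can poll with GET /_tasks/<id>.
r = requests.post(
    f"{NEW}/_reindex?wait_for_completion=false",
    json=body,
    auth=AUTH,
    verify=False,
)
r.raise_for_status()
print(r.json())
```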


You may want to speed up the snapshot/restore process by updating the settings of your repository. For instance, if S3 is being used, there are some relevant properties documented here: S3 repository | Elasticsearch Guide [8.6] | Elastic
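As a rough illustration, re-registering the repository with higher throughput limits might look like this. The repository and bucket names are placeholders; max_snapshot_bytes_per_sec and max_restore_bytes_per_sec are documented repository settings, but check the docs for your version, since newer releases leave the restore rate unlimited by default and throttle via the recovery settings instead.

```python
import requests

ES = "https://new-cluster.example.com:9200"    # placeholder cluster
AUTH = ("elastic", "changeme")                 # replace with real credentials

# Update the S3 repository definition with explicit throughput limits.
r = requests.put(
    f"{ES}/_snapshot/logs_repo",
    json={
        "type": "s3",
        "settings": {
            "bucket": "my-snapshot-bucket",            # placeholder bucket
            "max_snapshot_bytes_per_sec": "200mb",     # example value, not a recommendation
            "max_restore_bytes_per_sec": "200mb",
        },
    },
    auth=AUTH,
    verify=False,
)
r.raise_for_status()
```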

Also, there are some settings that can speed up shard initialization, as discussed in this thread: Snapshot restore is very slow to get started - #7 by Guilherme_Vieira
and here: Snapshot is taking too long
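The dynamic cluster settings usually mentioned in that context can be applied like this. The values below are examples only, not recommendations; raising them trades faster recovery against indexing and search performance.

```python
import requests

ES = "https://new-cluster.example.com:9200"    # placeholder cluster
AUTH = ("elastic", "changeme")                 # replace with real credentials

# Raise the recovery throughput cap and the number of concurrent recoveries per node.
r = requests.put(
    f"{ES}/_cluster/settings",
    json={
        "transient": {
            "indices.recovery.max_bytes_per_sec": "250mb",
            "cluster.routing.allocation.node_concurrent_recoveries": 4,
        }
    },
    auth=AUTH,
    verify=False,
)
r.raise_for_status()
print(r.json())
```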
