I'm having some issues running a remote reindex from cluster A to cluster B. The end goal is to let the user provide an index pattern; the remote reindex should then fetch all indices matching that pattern from the remote cluster and create them on the local cluster.
This should include setting op_type to "create", since it will be used to sync the clusters after some failure. I tried taking the script Elastic provides here: Reindex API | Elasticsearch Guide [8.5] | Elastic, and just removing the part where it appends a minus (-) to the index name, but that ends up failing.
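Roughly, the behaviour I'm after looks like this (a minimal Python sketch, not a working script; the host, credentials, and index names are made up): one reindex request body per matching remote index, so each index keeps its original name on the destination.

```python
def build_reindex_bodies(index_names, remote_host, username, password):
    """Build one remote-reindex request body per source index."""
    bodies = []
    for name in index_names:
        bodies.append({
            "source": {
                "remote": {
                    "host": remote_host,
                    "username": username,
                    "password": password,
                },
                "index": name,
            },
            "dest": {
                "index": name,          # keep the same name on the local cluster
                "op_type": "create",    # only index documents that don't exist yet
            },
        })
    return bodies

bodies = build_reindex_bodies(
    ["logs-2024.01.01", "logs-2024.01.02"],
    "https://cluster-a.example.com:9200", "user", "pass",
)
# Each body would then be POSTed to the local cluster's _reindex endpoint.
```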
One more thing worth mentioning: I'm not sure whether creating the missing indices on the destination will break ILM. Do the reindexed indices still contain the metadata ILM uses to tell them apart (their age, their order, and so on)?
I'm aware some people have written bash scripts for this, but I'd primarily like to know whether there's a way to solve it purely with Elasticsearch and its API. If not, I can implement the logic from those bash scripts myself in Ansible.
Sorry for not giving an example. There is no actual error; what happens is that all the data from the source indices gets written to a single destination index ("source" in this example). Only when I make the destination name different from the source, for example by appending a minus or any other string, do the indices get replicated semi-correctly. By semi-correctly I mean the data is correct, but the index names differ from the originals, which is not acceptable in this situation.
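What I think I need instead is to enumerate the matching indices on the remote cluster first (e.g. via GET _cat/indices/&lt;pattern&gt;?format=json) and then issue one reindex call per name. A hedged sketch of the parsing step, assuming the list-of-objects shape that _cat/indices returns with format=json:

```python
def matching_index_names(cat_indices_json, skip_hidden=True):
    """Extract index names from a _cat/indices?format=json response."""
    names = []
    for row in cat_indices_json:
        name = row["index"]
        if skip_hidden and name.startswith("."):
            continue  # skip system/hidden indices such as .kibana
        names.append(name)
    return sorted(names)

# Made-up sample response for illustration.
sample = [
    {"index": "logs-2024.01.02", "health": "green"},
    {"index": ".kibana_1", "health": "green"},
    {"index": "logs-2024.01.01", "health": "yellow"},
]
print(matching_index_names(sample))  # -> ['logs-2024.01.01', 'logs-2024.01.02']
```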
Regarding the second question: is there any way to make ILM behave in this setup the same way it does on the source cluster? If that's not possible with the basic license, would it be possible using CCR?
Have you considered using the snapshot and restore APIs? These retain the index settings, but they copy the indices exactly as they are and do not allow you to change mappings, which you can do when reindexing.
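Roughly, the request bodies would look like this (a sketch only; the repository name and index pattern are made up, and the exact settings you'd want depend on your setup):

```python
# Body for PUT _snapshot/<repo>/<snapshot-name> on the source cluster.
snapshot_body = {
    "indices": "logs-*",              # same pattern the reindex would use
    "include_global_state": False,    # indices only, no cluster state
}

# Body for POST _snapshot/<repo>/<snapshot-name>/_restore on the destination.
restore_body = {
    "indices": "logs-*",
    "include_global_state": False,
    "index_settings": {
        # drop replicas during restore to speed it up, re-add afterwards
        "index.number_of_replicas": 0,
    },
}
```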
Yeah I figured snapshot and restore would be my next best bet. Just wanted to confirm if there's a simpler way before doing that.
The potential issue with snapshot and restore would be the time it takes to complete. The cluster is quite large, with over 2 TB ingested daily. I'll see if I can make it work. Thank you for the help, Mark and Christian.