We are looking for a solution to keep indices that are roughly 7 GB per day in size for about a year. Keeping them locally on disk isn't going to work for us. My one idea is to back them up to S3, then have a separate cluster in AWS reading from the S3 bucket, where we can then look at that data specifically and possibly enable cross-cluster searching. Is this something that seems reasonable, or is there something else that can handle this better?
Thanks,
It seems reasonable. You can use the snapshot/restore functionality along with the repository-s3 plugin to put the indices into a repository on S3, and then restore them as needed (either into the original cluster or into a new one, with which you could use cross-cluster search).
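A minimal sketch of that flow, using plain HTTP calls against the snapshot REST API. The repository name, bucket, hosts, and index names here are all hypothetical placeholders, and it assumes the repository-s3 plugin is installed and S3 credentials are already in the Elasticsearch keystore:

```python
import requests

LOCAL = "http://localhost:9200"          # cluster holding the live indices
ARCHIVE = "http://archive-node:9200"     # AWS cluster that restores from S3

# 1. Register an S3 snapshot repository on the source cluster.
requests.put(
    f"{LOCAL}/_snapshot/s3_archive",
    json={"type": "s3", "settings": {"bucket": "my-es-archive-bucket"}},
).raise_for_status()

# 2. Snapshot one day's index into the repository.
requests.put(
    f"{LOCAL}/_snapshot/s3_archive/daily-2019-01-01",
    params={"wait_for_completion": "true"},
    json={"indices": "logs-2019-01-01"},
).raise_for_status()

# 3. On the AWS cluster, register the same bucket (readonly is the safe
#    option when a second cluster reads a shared repository) and restore.
requests.put(
    f"{ARCHIVE}/_snapshot/s3_archive",
    json={
        "type": "s3",
        "settings": {"bucket": "my-es-archive-bucket", "readonly": True},
    },
).raise_for_status()
requests.post(
    f"{ARCHIVE}/_snapshot/s3_archive/daily-2019-01-01/_restore",
    json={"indices": "logs-2019-01-01"},
).raise_for_status()
```

If you then want cross-cluster search, you'd register the archive cluster as a remote on the original cluster (via the `cluster.remote.*.seeds` cluster setting) and query the restored indices with a `remote_alias:index` pattern.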
Thank you @DavidTurner. This is the way we will be going forward, then.