I have upgraded my cluster from version 5.x to 6.8.5. Every night a Python script uses the elasticsearch library to create a snapshot of several indices; the total is more or less 200 GB.
Before the upgrade this procedure took 2-3 hours; now it takes 10 hours. Is there anything I can tune?
what repository are you using for those snapshots?
What value are you using for the max_snapshot_bytes_per_sec setting on your repository? Unless it's already close to what your network can handle, I'd try increasing it.
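To check what the repository is currently throttled to, you can inspect the registration with a plain `GET /_snapshot/<name>` call. A minimal stdlib-only sketch (the repository name `my_backup`, the host, and the sample path are placeholders; the 40mb fallback is the documented default for fs repositories in 6.x):

```python
import json
from urllib.request import urlopen

# Hypothetical repository name; substitute your own.
REPO = "my_backup"

def throttle_of(repo_info, name=REPO):
    """Pick max_snapshot_bytes_per_sec out of a GET _snapshot/<name>
    response body; when absent, the fs repository default (40mb per
    second in 6.x) applies."""
    settings = repo_info[name].get("settings", {})
    return settings.get("max_snapshot_bytes_per_sec", "40mb")

def fetch_repo_info(host="http://localhost:9200", name=REPO):
    # Requires a reachable cluster; equivalent to: GET /_snapshot/my_backup
    with urlopen("%s/_snapshot/%s" % (host, name)) as resp:
        return json.load(resp)

# Example of the response shape this parses (no setting configured,
# so the default is reported):
sample = {"my_backup": {"type": "fs",
                        "settings": {"location": "/mnt/san/backups"}}}
print(throttle_of(sample))  # -> 40mb
```

If the reported value is still the default, that throttle, not the SAN, may well be what's capping your nightly snapshot.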
I'm using a filesystem repository (hosted on SAN storage), and
max_snapshot_bytes_per_sec is left at its default value.
I was trying to change it, but I discovered that I have to recreate the repository. What bad news!
I don't think this should be an issue at all. You can just re-register the repository in ES; removing and re-adding a repository from the cluster doesn't do anything to the files on disk.
So, if I delete the repository and then recreate it, will I still be able to see the snapshots created with the old repository?
Yes, that should work just fine. No state about the contents of the repository is stored in ES itself; all the metadata about which snapshots exist is stored in the repository. So if you register a new repository pointing at the same path, it will contain all the snapshots.
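The delete-then-re-register step above can be sketched as two REST calls. Everything here is a placeholder assumption (host, repository name `my_backup`, location, and the `200mb` rate); the key point is that the PUT body reuses the same `type` and `location` as the existing repository, with only the throttle changed:

```python
import json
from urllib.request import Request, urlopen

# Placeholders: adjust host, repository name, location, and rate.
HOST = "http://localhost:9200"
REPO = "my_backup"

def reregister_body(location, rate="200mb"):
    """Body for PUT /_snapshot/<repo>: same type and location as the
    existing repository, plus a higher snapshot throttle."""
    return {"type": "fs",
            "settings": {"location": location,
                         "max_snapshot_bytes_per_sec": rate}}

def reregister(location, rate="200mb"):
    # Needs a live cluster. The DELETE only removes the cluster's
    # reference to the repository; files on disk are untouched.
    for method, path, body in [
            ("DELETE", "/_snapshot/%s" % REPO, None),
            ("PUT", "/_snapshot/%s" % REPO,
             reregister_body(location, rate))]:
        req = Request(HOST + path, method=method,
                      data=None if body is None else
                           json.dumps(body).encode(),
                      headers={"Content-Type": "application/json"})
        urlopen(req)
    # Afterwards, GET /_snapshot/my_backup/_all should still list
    # every snapshot written under the old registration.
```

After re-registering, listing the snapshots (`GET /_snapshot/my_backup/_all`) is a quick sanity check that the old snapshots are all still visible.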
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.