Is it possible to set up snapshot repo without restarting anything?

Hi,

I'm trying to export data from an old cluster (6.3.0, 21 nodes) to our internal Ceph object store (S3-compatible). It's really frustrating that all these steps require restarting the cluster over and over.

What I've done so far:

  1. attempted to create an S3 snapshot repo - error: unknown type "s3", because the plugin is not installed.

  2. installed the "repository-s3" plugin and restarted the whole cluster (some nodes have no Internet access, so I took extra steps to download / upload / install the plugin).

  3. attempted to create the repo again - oops, I cannot set the secret key in the request body.

  4. added the S3 keys with elasticsearch-keystore and restarted the whole cluster, because "reloadable secure settings" is only available in 6.4.0+.

  5. attempted to create the repo again - well, it turns out I must override the default S3 endpoint in elasticsearch.yml, which means restarting the whole cluster one more time.

  6. (haven't tried yet - I need a rest)
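For reference, the repo-creation call that keeps failing at steps 1, 3, and 5 looks roughly like this (the repo name "ceph_backup" is a placeholder; the bucket name comes from later in the thread):

```shell
# PUT a snapshot repository definition; fails with "unknown type [s3]"
# until repository-s3 is installed on every node
curl -X PUT "localhost:9200/_snapshot/ceph_backup" \
  -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "es-backups",
    "client": "default"
  }
}'
```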

It feels so unreasonable that, with all those dynamic settings (e.g. I can put the whole cluster into a read-only state on the fly), I have to restart the cluster three times to create a snapshot repo.

The plugin always requires a restart; there is no support for hot-reloading plugins. The only shortcut here is to configure everything upfront (keystore entries plus the endpoint setting) and only then restart once - ideally after validating the full configuration on a separate small one-node cluster that can successfully export to your custom Ceph endpoint.
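Concretely, the single-restart sequence on 6.3 would look something like this on each node (the local plugin zip path and endpoint IP are placeholders for your environment):

```shell
# 1. install the plugin offline from a locally downloaded zip
bin/elasticsearch-plugin install file:///tmp/repository-s3-6.3.0.zip

# 2. add the credentials for the "default" S3 client to the keystore
bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key

# 3. point the client at the internal Ceph gateway in elasticsearch.yml:
#      s3.client.default.endpoint: "10.0.0.5"
#    (an IP address sidesteps the bucket-in-hostname issue described
#    at the end of the thread)

# 4. now restart the node - once.
```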

A dry-run API call might help here - one that accepts all the settings (except the reloadable keystore settings) and validates them without applying anything. Alternatively, making the endpoint dynamically configurable would have let you get away with a single restart, in combination with reloadable secure settings.
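The closest thing that exists today is repository verification, which checks that all nodes can actually write to the repository - it only works after the repo has been registered, so it does not remove any of the restarts above. The repo name below is a placeholder:

```shell
# verification runs automatically on repo creation; to re-run it later:
curl -X POST "localhost:9200/_snapshot/ceph_backup/_verify"

# or register the repo without the immediate check and verify afterwards:
curl -X PUT "localhost:9200/_snapshot/ceph_backup?verify=false" \
  -H 'Content-Type: application/json' \
  -d'{"type": "s3", "settings": {"bucket": "es-backups"}}'
```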

Feel free to open an issue in the elasticsearch repo and explain the rationale behind it.

I've updated the settings, but it failed again because:

  1. the endpoint is s3.example.com and the bucket is named "es-backups"
  2. ES sends the request to "http://es-backups.s3.example.com" :cry:

After a few more attempts (and restarts), I figured out that I must specify an IP address as the endpoint, which stops ES from prepending the bucket name to the service hostname.
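To illustrate what was going wrong: the S3 client defaults to virtual-hosted-style addressing, where the bucket becomes part of the hostname, and falls back to path-style addressing when the endpoint is a bare IP. (Newer plugin versions also expose a path_style_access client setting; I'm not sure it is available on 6.3.) A hypothetical sketch of the two URL styles:

```shell
# toy illustration of the two S3 addressing styles - not plugin code
s3_url() {
  local endpoint=$1 bucket=$2 key=$3 style=$4
  if [ "$style" = "path" ]; then
    echo "http://${endpoint}/${bucket}/${key}"      # bucket in the path
  else
    echo "http://${bucket}.${endpoint}/${key}"      # bucket in the hostname
  fi
}

s3_url s3.example.com es-backups snap-1 virtual  # → http://es-backups.s3.example.com/snap-1
s3_url 10.0.0.5       es-backups snap-1 path     # → http://10.0.0.5/es-backups/snap-1
```

The first form only resolves if your DNS has a wildcard record for `*.s3.example.com`, which an internal Ceph gateway typically does not, hence the failure above.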