Why does the snapshot API require a shared file system?

In my case I want to back up only one node of a cluster. I know it is good practice to back up the whole cluster, but my ELK infrastructure does not have a shared file system. If there is no other choice I will of course create one, but I wonder why I can't put my backup on a local disk. What is the difference between backing up to a local disk and to a shared disk? Thanks.

If it's a single-node cluster then the local FS will be fine, as there are no other nodes that need to write to it.
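As a minimal sketch of the single-node case (the directory path and repository name are placeholders, not from this thread): the backup directory first has to be whitelisted in `elasticsearch.yml`, then the repository can be registered.

```console
# elasticsearch.yml — the parent directory must be listed in path.repo
path.repo: ["/mount/backups"]
```

```console
PUT /_snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_fs_backup_location"
  }
}
```

If the location is not covered by `path.repo`, Elasticsearch rejects the registration with a `repository_exception`.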

So I actually can snapshot one node in a cluster if I specify a path pointing to a local FS?
I cannot find any confirmation of this in the official Elasticsearch documentation or via a Google search.

Yep, but only for a one-node cluster.

It's not documented because we recommend clusters of 3 or more nodes.

Thanks for answering.
How about 1 node out of a 3-node cluster? Is it possible to snapshot one node in the cluster if I specify a path pointing to the local FS of that node?
Another question: what would the "type" be for the API request? (The official site only suggests "fs", and that's for a shared FS.)

PUT /_snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_fs_backup_location",
    "compress": true
  }
}
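For context, once a repository like the one above is registered, a snapshot is taken with a separate request (the snapshot name `snapshot_1` here is just a placeholder):

```console
PUT /_snapshot/my_fs_backup/snapshot_1?wait_for_completion=true
```

Without `wait_for_completion=true` the request returns immediately and the snapshot runs in the background.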

Nope, all nodes need to be able to write data to the storage location.

Just use fs.
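To spell out the multi-node requirement: with `"type": "fs"`, the `location` path must resolve to the same shared mount (NFS, for example) on every data and master node, and that path must appear in `path.repo` on each of them. The repository verify API can be used to check that every node can actually write there (repository name matching the earlier request):

```console
POST /_snapshot/my_fs_backup/_verify
```

If any node cannot write to the location, the verification fails and names the affected nodes.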

Thanks for answering.
One further question: how large would the backup be if my index is 10 GB in size? Or how much space should I assume the backup will need?

It'd be a similar size.
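Once a snapshot exists, its actual on-disk size can be checked with the snapshot status API (snapshot name again hypothetical); the response reports `size_in_bytes` under the `total` stats:

```console
GET /_snapshot/my_fs_backup/snapshot_1/_status
```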

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.