Create snapshot

I have a cluster of three nodes (node1, node2, node3)
spread across several machines, and I want to take a snapshot of a specific index that lives, for example, on node2.
I added path.repo to the configuration file on each node, but when I restart the cluster I get an error saying the repository was not recognized or could not be found.
I created the repository folder on the machine where node2 runs and restarted that node successfully, but node1 and node3 did not work.
It may be related to accessing the repository folder on node2's machine.
Is there a way to do that and link the rest of the nodes to the repository?

Have you set up the repository as a shared filesystem repository that is available on the same path from all nodes in the cluster? Note that local directories on the different nodes will not work even if the path is the same.

If it is not working, what is the full error message?
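For reference, the repository path must be whitelisted via `path.repo` in `elasticsearch.yml` on every master and data node. A minimal sketch (the mount path `/mnt/db_snapshot` here is an example, not from your setup):

```yaml
# elasticsearch.yml — identical on all master and data nodes
path:
  repo:
    - /mnt/db_snapshot
```

After changing this setting, each node needs a restart before the repository can be registered.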


See these docs for more information, particularly:

Check for common misconfigurations using the Verify snapshot repository API and the Repository analysis API. When the repository is properly configured, these APIs will complete successfully. If the verify repository or repository analysis APIs report a problem then you will be able to reproduce this problem outside Elasticsearch by performing similar operations on the file system directly.
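Assuming the repository is registered under the name `my_repository` (substitute your own), the two checks mentioned above look like this in Dev Tools:

```
POST /_snapshot/my_repository/_verify

POST /_snapshot/my_repository/_analyze?blob_count=10&max_blob_size=1mb
```

The verify call confirms all nodes can write to the repository; the analysis call runs a deeper read/write exercise and reports any inconsistencies it finds.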


No, I didn't do that exactly.
I created a folder named db_snapshot and added its path to the configuration file for all nodes. When I restart, it fails and returns an error, but I noticed that the node on the machine where the repository folder is located restarts successfully.

As pointed out in the docs linked to, the cluster requires a shared repository, e.g. an NFS mount, that is accessible by all nodes. Local folders do not work. The cluster will validate that this is the case and I would expect this to be shown in the error messages.
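As a sketch of what such an NFS setup might look like (hostnames and paths here are examples only; adapt them to your machines):

```
# On the machine exporting the share (e.g. node2's host), in /etc/exports:
/srv/db_snapshot  *(rw,sync,no_subtree_check)

# On every node's machine, mount the export at the same local path:
sudo mkdir -p /mnt/db_snapshot
sudo mount -t nfs node2-host:/srv/db_snapshot /mnt/db_snapshot
```

The key point is that `/mnt/db_snapshot` ends up pointing at the *same* underlying storage on every machine, not at three separate local folders.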


I have configured the repository, and the node on the machine where the repository folder was created starts successfully, but the nodes that do not have the repository return an error.
That is, creating the repository itself worked, but the other two machines cannot access the repository on the second machine.

Okay.
Is it possible to take the snapshot using one node only?
That is, register the repository on the node that holds the shards of the index I want to snapshot?

Snapshots are taken cluster wide, so if you have more than one node in the cluster local repositories will not work.
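Even when snapshotting a single index, every node that holds a shard of it writes to the repository, which is why the repository must be shared. Once a shared repository is registered (names here are examples), snapshotting just one index looks like this:

```
PUT /_snapshot/my_repository/snapshot_1?wait_for_completion=true
{
  "indices": "my_index"
}
```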

Ok
I'm going to do a shared filesystem repository and try it out.

Thank you

I have a note.
When I create the same folder in the same place on all nodes — for example, I made the folder on node1, and also on node2 and node3 —
do I put the path of the repository folder from one of the nodes into all nodes, or does each node keep its own folder path only?

You need to mount the shared filesystem repository on the same path in all nodes in the cluster.

Your statement that you have configured the repository correctly isn't true. If you had followed the docs we linked, you would have got an error describing the mistake you made. Christian's guidance is accurate, but once you've followed it you must use those troubleshooting APIs to check everything is working.


ok
I'll do it.

But the path contains things that differ from one machine to another, for example the machine name: one machine is node1, the other is node2.
For example:
If the repository is created on a machine where the username or disk is node1, while on the other machine it is node2, then the path is certainly different.
Would it be correct to add one of those paths to all the nodes? And would it be recognized?

Did you read the docs I linked to? It states:

To register a shared file system repository, first mount the file system to the same location on all master and data nodes. Then add the file system’s path or parent directory to the path.repo setting in elasticsearch.yml for each master and data node. For running clusters, this requires a rolling restart of each node.
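Putting the quoted steps together: after mounting the share at the same path everywhere and adding that path to `path.repo`, the repository is registered once, cluster-wide (the repository name and location below are examples):

```
PUT /_snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "/mnt/db_snapshot"
  }
}
```

Note the `location` refers to the shared mount point, which is identical on every node regardless of each machine's hostname or username, so per-machine differences elsewhere in the filesystem do not matter.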

ok
Thank you, I understand.