Is this the correct approach for taking a snapshot of an index that is in production?

I have 4 Elasticsearch nodes in a cluster.

I want to take the snapshot from the server that holds the primary shard.

Step 1: sudo systemctl stop elasticsearch

Step 2: Add path.repo: ["/data/elasticbackup"] to the elasticsearch.yml file.

Step 3: Give permissions on the folder path: sudo chmod -R 777 /data/elasticbackup

Step 4: sudo systemctl start elasticsearch

Will the node join the cluster automatically after starting? What is the immediate action we have to take if the node fails to join?
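A quick way to confirm the node has rejoined after the restart is to check the node list and cluster health (a sketch, reusing the host/port that appears later in this thread):

```
GET http://xx.xx.xx.xx:4200/_cat/nodes?v
GET http://xx.xx.xx.xx:4200/_cluster/health
```

If the node is missing from the list or the cluster stays red, the node's logs are the first place to look.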

Step 5: PUT request from Postman to register the snapshot repository:

```
{
	"type": "fs",
	"settings": {
		"compress": true,
		"location": "/data/elasticbackup"
	}
}
```
Step 6: Validate whether the repository has been registered.
GET request from Postman returns:

```
{
    "elasticbackup": {
        "type": "fs",
        "settings": {
            "compress": "true",
            "location": "/data/elasticbackup"
        }
    }
}
```

Step 7: Take the snapshot (around 300 GB, the productioncustomerdata index)

input:

```
{
	"indices": "productioncustomerdata"
}
```

Step 8: Delete the existing index, productioncustomerdata.

DELETE request - indices - http://xx.xx.xx.xx:4200/productioncustomerdata
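Before issuing the DELETE, it is worth confirming the snapshot actually completed (a sketch; `snapshot_1` is a hypothetical snapshot name):

```
GET http://xx.xx.xx.xx:4200/_snapshot/elasticbackup/snapshot_1
```

The response should report "state": "SUCCESS" before the index is deleted.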

Are the above steps correct, or do we need to perform any other activity?

Snapshots are cluster-wide, so the repository needs to be made available on all nodes at the same path.

Can we not snapshot only particular indices, e.g. productioncustomerdata? In my case the primary shard of the productioncustomerdata index is on server4 and the replica shard is on server3.

I thought of updating path.repo only on server2, as the primary is available there.

You can snapshot only individual indices, but the repository still needs to be configured on all master and data nodes.

OK, understood. We have traffic 24/7; is there any way we can perform the above steps without downtime?

No, that change requires a restart, but you can perform a rolling one. Note that shared storage is required and needs to be mounted at the same path across all nodes.
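The rolling restart for each node in turn can be sketched as follows (this mirrors the standard procedure from the Elasticsearch docs; the exact allocation setting value may vary by version):

```
# 1. Disable shard allocation
PUT http://xx.xx.xx.xx:4200/_cluster/settings
{
	"transient": { "cluster.routing.allocation.enable": "none" }
}

# 2. Stop the node, add path.repo to elasticsearch.yml, start the node again

# 3. Re-enable allocation and wait for the cluster to return to green
PUT http://xx.xx.xx.xx:4200/_cluster/settings
{
	"transient": { "cluster.routing.allocation.enable": "all" }
}
GET http://xx.xx.xx.xx:4200/_cluster/health
```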

Currently shared storage is not mounted; every node has its own storage. Is it recommended to have shared storage for all nodes in Elasticsearch? Any specific reason for this?

Shared storage is a requirement for snapshot repositories. Nodes should however store their own data on local storage.

Can we use Azure Blob Storage? Our servers are hosted on Azure VMs.

Is any documentation available for using Azure Blob Storage as the repository?

There is an Azure repository plugin that you can install to take snapshots to Azure blob storage.
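The setup can be sketched as follows (run on every node; the repository name `azurebackup` and the container name are illustrative):

```
sudo bin/elasticsearch-plugin install repository-azure
bin/elasticsearch-keystore add azure.client.default.account
bin/elasticsearch-keystore add azure.client.default.key
```

After restarting the nodes, register the repository:

```
PUT http://xx.xx.xx.xx:4200/_snapshot/azurebackup
{
	"type": "azure",
	"settings": {
		"container": "elasticbackup"
	}
}
```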

I have tested it with Azure Blob Storage as the repository, and it is working fine in the development environment. I will perform the same in production. But before that:

We are making the API call below to take the snapshot.

We provided the query string wait_for_completion=true in Postman. We are taking a snapshot of an index that is about 350 GB; will Postman really wait for the response, or will the request time out? If the request times out, will the background job of taking the snapshot continue?
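For a snapshot of this size, a common pattern is not to wait synchronously but to poll the status API instead (a sketch; `snapshot_1` is a hypothetical snapshot name):

```
PUT http://xx.xx.xx.xx:4200/_snapshot/elasticbackup/snapshot_1?wait_for_completion=false

GET http://xx.xx.xx.xx:4200/_snapshot/elasticbackup/snapshot_1/_status
```

With wait_for_completion=false the PUT returns as soon as the snapshot is initialized, and the snapshot itself runs in the background on the cluster.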

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.