Snapshot and path.repo on one cluster node


(Guillaume Renard) #1

How can I create a snapshot of my Kibana index with a cluster of 4 nodes?
I created a /data/backup folder on my Kibana host, added path.repo=["/data/backup"], and restarted ES. Then I try to
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/data/backup"
  }
}

but get
{
  "error": {
    "root_cause": [
      {
        "type": "repository_exception",
        "reason": "[my_backup] location [/data/backup] doesn't match any of the locations specified by path.repo because this setting is empty"
      }
    ],
    "type": "repository_exception",
    "reason": "[my_backup] failed to create repository",
    "caused_by": {
      "type": "repository_exception",
      "reason": "[my_backup] location [/data/backup] doesn't match any of the locations specified by path.repo because this setting is empty"
    }
  },
  "status": 500
}
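For reference, the setting in elasticsearch.yml (on the Kibana host only, so far) looks like this:

```yaml
# elasticsearch.yml
# registers /data/backup as an allowed snapshot repository location
path.repo: ["/data/backup"]
```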

I don't have a shared filesystem between my nodes, and I'm using the 6.4.2 stack.

Thanks for your help.


#2

Hello,
I have the same situation (4 nodes, 2 of them data nodes).
On both data nodes I created the folder for the snapshots and restarted Elasticsearch, but I get the same error message as you.
I am wondering if the folder has to be shared by all the nodes; I have not tested that yet.


(David Pilato) #3

Yes, it must be a shared folder.


(David Pilato) #4

You can always use Kibana UI to export your objects.


#5

Thank you for your reply.
Is it possible to remove path.repo from one of the 2 .yml configuration files? What would be the impact?


(David Pilato) #6

No, this won't work. The path must be set on all nodes and must be shared.

But why do you want to use snapshot and restore?


#7

OK!
In my case the purpose is to rename indices, so I have to take snapshots and restore them under a new name.
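Something like this is what I have in mind for the restore-with-rename step (a sketch; snapshot_1 and the index names are placeholders for my setup):

```
POST /_snapshot/my_backup/snapshot_1/_restore
{
  "indices": "filebeat-6.3.0-*",
  "rename_pattern": "filebeat-6.3.0-(.+)",
  "rename_replacement": "filezilla-logs-$1"
}
```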


#8

Just 2 more questions.

Does path.repo have to be set on all nodes, or just the data nodes?

I have set path.repo on my 2 data nodes and mounted the NFS path on my folder, but it's not working.

This morning, when I restarted Elasticsearch after setting path.repo, the cluster state was red because the replica shards were at 0 until all shards were replicated.

How can I avoid that?


(David Pilato) #9

Maybe you could use the reindex API?
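A minimal reindex call for such a rename might look like this (the index names are just examples):

```
POST /_reindex
{
  "source": { "index": "filebeat-6.3.0-2018.11.07" },
  "dest":   { "index": "filezilla-logs-2018.11.07" }
}
```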


(David Pilato) #10

the path.repo has to be indicated in all nodes or just the data nodes ?

All data and master-eligible nodes, IIRC.

How to avoid that ?

You can set one replica and do a rolling restart instead.


#11

Thank you very much for your answers; I am going to test the reindex API with some fake indices.

For the rolling restart, what do you mean by setting "one replica"?

The documentation says to set it to "none":
https://www.elastic.co/guide/en/elasticsearch/guide/master/_rolling_restarts.html


(David Pilato) #12

Not the same option. The one I'm speaking about is number_of_replicas in the index settings.
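For example (a sketch, with a placeholder index name):

```
PUT /my_index/_settings
{
  "index": { "number_of_replicas": 1 }
}
```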


#13

I have done this:
reindex API on all the indices I wanted to rename. No problem there; I see the new indices with the same number of documents as the original indices.

FYI, I renamed the indices filebeat-6.3.0-YYYY.mm.dd to filezilla-logs-YYYY.mm.dd.

When I change the source of a saved search to read from the new indices, I get an error message:

Could not locate that index-pattern (id: filezilla-logs-*),

But this pattern exists; I created it once I had renamed the indices.

I don't understand what's wrong.

I have also created a new search on the new index and saved it. When I open it, the index is shown as a serial number: 2a61ec40-e2a2-11e8-a38e-69b7fe5aaa9d

Why isn't it the same name as the index pattern?

For winlogbeat or heartbeat, it's not a serial number.

So how can I point the saved searches at the new index pattern without recreating them from scratch?

Thank you for your help.


(David Pilato) #14

I don't know but you should probably ask in #kibana instead.


(system) #15

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.