How can I create a snapshot of my Kibana index on a cluster of 4 nodes?
I created a /data/backup folder on my Kibana host, added path.repo=["/data/backup"], and restarted ES. Then I tried:
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/data/backup"
  }
}
but I get:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_exception",
        "reason": "[my_backup] location [/data/backup] doesn't match any of the locations specified by path.repo because this setting is empty"
      }
    ],
    "type": "repository_exception",
    "reason": "[my_backup] failed to create repository",
    "caused_by": {
      "type": "repository_exception",
      "reason": "[my_backup] location [/data/backup] doesn't match any of the locations specified by path.repo because this setting is empty"
    }
  },
  "status": 500
}
I don't have a shared filesystem between my nodes, and I'm on the 6.4.2 stack.
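For reference, here is roughly what I think the setting should look like in elasticsearch.yml (just a sketch of what I intended, not my exact file):

# elasticsearch.yml
# Whitelist of filesystem locations that snapshot repositories are allowed to use.
# My understanding is that this has to be present on every node (master and data),
# and each node needs a restart before it is picked up.
path.repo: ["/data/backup"]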
Hello
I have the same situation (4 nodes, 2 of them data nodes).
On both data nodes I created the folder for the snapshots and restarted Elasticsearch, but I get the same error message as you.
I am wondering if the folder has to be shared by all the nodes; I have not tested that yet.
Does path.repo have to be set on all the nodes, or just on the data nodes?
I have set path.repo on my 2 data nodes and mounted the NFS share on that folder, but it's still not working.
This morning, when I restarted Elasticsearch after setting path.repo, the cluster state was red, because the replica shards were down to 0 until all the shards had been replicated again.
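One thing I still want to verify is whether every node actually picked up the setting after the restart. I believe the nodes info API can show this (a sketch; the filter_path only trims the response down to the node names and their path.repo value):

GET /_nodes/settings?filter_path=nodes.*.name,nodes.*.settings.path.repo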
I have done this:
I used the reindex API on all the indices I wanted to rename: no problem, I can see the new indices with the same number of documents as the original ones.
FYI, I renamed the indices filebeat-6.3.0-YYYY.mm.dd to filezilla-logs-YYYY.mm.dd.
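For a single day's index, the reindex call looked roughly like this (the date here is only an example):

POST /_reindex
{
  "source": {
    "index": "filebeat-6.3.0-2018.11.01"
  },
  "dest": {
    "index": "filezilla-logs-2018.11.01"
  }
}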
When I change the source of a saved search so it reads from the new indices, I get an error message:
Could not locate that index-pattern (id: filezilla-logs-*)
But this pattern exists; I created it after I renamed the indices.
I don't understand what's wrong.
I have also created a new search on the new index and saved it. When I open it, the index is shown as a generated ID: 2a61ec40-e2a2-11e8-a38e-69b7fe5aaa9d
Why isn't it the same name as the index pattern?
For winlogbeat or heartbeat, it's not a generated ID.
So how can I make the saved searches use the new index pattern without recreating them from scratch?
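One way I thought of to at least see which ID each index pattern really has is to search the .kibana index directly. This is only a sketch, assuming the 6.x saved-object layout where each document carries a "type" field and the pattern title sits under "index-pattern.title":

GET /.kibana/_search
{
  "_source": ["index-pattern.title"],
  "query": {
    "term": {
      "type": "index-pattern"
    }
  }
}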