I have ELK running in a Docker container. A directory in my home dir is mounted as a volume in the container and is defined as path.repo in Elasticsearch.
The permissions are set correctly and the folder owner is elasticsearch.
When I try to create a backup repository for snapshots using the following command, I get the error "Disk quota exceeded".
What quota is this? Where is it defined? I have plenty of disk space in my home dir, which is where the data is actually stored.
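For context, the setup looks roughly like this; the host directory name and the image tag below are placeholders, not my exact values:

# home-dir folder bind-mounted as the snapshot location inside the container
docker run -d --name elasticsearch \
  -p 9200:9200 \
  -v "$HOME/es-snapshots:/usr/share/elasticsearch/snapshot" \
  docker.elastic.co/elasticsearch/elasticsearch:7.13.4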
$ curl -u "elastic:password" -s -XPUT http://elasticsearch:9200/_snapshot/backup_repository -H "Content-Type: application/json" -d '{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/snapshot/backup_repository"
  }
}'
{"error":{"root_cause":[{"type":"repository_exception","reason":"[backup_repository] cannot create blob store"}],"type":"repository_verification_exception","reason":"[backup_repository] path is not accessible on master node","caused_by":{"type":"repository_exception","reason":"[backup_repository] cannot create blob store","caused_by":{"type":"file_system_exception","reason":"/usr/share/elasticsearch/snapshot/backup_repository: Disk quota exceeded"}}},"status":500}[root@54b5f4a7d28f elasticsearch]
varunpappu (Varun Subramanian), July 28, 2021, 10:45am
Hi Nicole,
Has the location been added under path.repo in the elasticsearch.yml file on all the nodes? I think that might be the reason for the error.
The top-level folder is defined in the elasticsearch.yml file:
path.repo: ["/usr/share/elasticsearch/snapshot"]
varunpappu (Varun Subramanian), July 28, 2021, 11:41am
If path.repo is added on all the nodes in the cluster, then I believe you need to update your curl request to use the registered path itself:
{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/snapshot"
  }
}
In your request you had used the following:
"location": "/usr/share/elasticsearch/snapshot/backup_repository"
I tried that, but the result was the same:
curl -u "elastic:passwd" -s -XPUT http://elasticsearch:9200/_snapshot/backup_repository -H "Content-Type: application/json" -d '{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/snapshot"
  }
}'
{"error":{"root_cause":[{"type":"exception","reason":"failed to create blob container"}],"type":"repository_verification_exception","reason":"[backup_repository] path is not accessible on master node","caused_by":{"type":"exception","reason":"failed to create blob container","caused_by":{"type":"file_system_exception","reason":"/usr/share/elasticsearch/snapshot/tests-TcFMTcznRQaK3r1bMzlXug: Disk quota exceeded"}}},"status":500}
varunpappu (Varun Subramanian), July 28, 2021, 11:51am
Can you run the following
GET /_snapshot/_all or GET /_snapshot/
in the dev console and see if the repository is registered?
I hope you have added path.repo on all the nodes in the cluster and restarted them.
It looks like it was created despite the error:
curl -u "elastic:passwd" -s -GET http://elasticsearch:9200/_snapshot/
{"backup_repository":{"type":"fs","settings":{"location":"/usr/share/elasticsearch/snapshot"}}}
varunpappu (Varun Subramanian), July 28, 2021, 12:00pm
That's good, we now have the backup repo. Is this running locally or on an EC2 instance, if I may ask?
Can you try a simple backup of an index and see if it works?
Unfortunately it fails with the same error about the disk quota, and indeed no output is created in the snapshot folder:
curl -u "elastic:passwd" -s -XPUT http://elasticsearch:9200/_snapshot/backup_repository/snapshot_1?wait_for_completion=true -H "Content-Type: application/json" -d '{
  "indices": "*",
  "ignore_unavailable": true,
  "include_global_state": false,
  "metadata": {
    "taken_by": "user123",
    "taken_because": "backup before upgrading"
  }
}'
{"error":{"root_cause":[{"type":"snapshot_exception","reason":"[backup_repository:snapshot_1/8cN-oUJ1QGu1swz0dZL1Yw] failed to update snapshot in repository"}],"type":"snapshot_exception","reason":"[backup_repository:snapshot_1/8cN-oUJ1QGu1swz0dZL1Yw] failed to update snapshot in repository","caused_by":{"type":"file_system_exception","reason":"/usr/share/elasticsearch/snapshot/meta-8cN-oUJ1QGu1swz0dZL1Yw.dat: Disk quota exceeded","suppressed":[{"type":"exception","reason":"failed to create blob container","caused_by":{"type":"file_system_exception","reason":"/usr/share/elasticsearch/snapshot/indices: Disk quota exceeded"}},{"type":"file_system_exception","reason":"/usr/share/elasticsearch/snapshot/snap-8cN-oUJ1QGu1swz0dZL1Yw.dat: Disk quota exceeded"}]}},"status":500}
varunpappu (Varun Subramanian), July 28, 2021, 12:14pm
We can try a few things now:
1. If you are running Docker locally, can you increase the disk space allocated to Docker?
2. Can you run GET _cluster/settings and see if any disk watermark settings are set?
3. Try creating a simple index with a single document and taking a backup of it, just to see if the problem is related to the size of the index (a minimal sketch follows below).
Most probably this is a Docker-related issue, I feel, and it might also fail if the index is huge.
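A minimal sketch of step 3, assuming the same host and credentials as above; the index name test-index and snapshot name snapshot_test are placeholders:

# index a single document into a throwaway index
curl -u "elastic:passwd" -s -XPUT http://elasticsearch:9200/test-index/_doc/1 -H "Content-Type: application/json" -d '{"message": "snapshot smoke test"}'

# snapshot only that index into the registered repository
curl -u "elastic:passwd" -s -XPUT "http://elasticsearch:9200/_snapshot/backup_repository/snapshot_test?wait_for_completion=true" -H "Content-Type: application/json" -d '{
  "indices": "test-index"
}'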
There is a lot of disk space (192 GB) in the place where Docker is stored.
GET _cluster/settings returns the following output:
{"persistent":{},"transient":{}}
nothing related to disk.
I'm trying to back up an empty index, so it is not related to the size.
Maybe it is somehow related to the fact that the snapshot area is a mounted volume; I need to check it in a different configuration and see if that works.
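One quick way to check that, as a sketch (es-snapshots is just a hypothetical volume name): swap the home-dir bind mount for a named Docker volume and re-register the repository.

docker volume create es-snapshots
# recreate the container with
#   -v es-snapshots:/usr/share/elasticsearch/snapshot
# instead of the home-dir bind mount, then re-run the PUT _snapshot request above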
varunpappu (Varun Subramanian), July 28, 2021, 12:34pm
Great if you are able to back up an empty index. Yes, it might be a mounted-volume issue as well.
You can run df -h on an Ubuntu machine and see the necessary info.
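For example, checked from inside the container (elasticsearch here is a placeholder for whatever the ELK container is named):

docker exec -it elasticsearch df -h /usr/share/elasticsearch/snapshot
docker exec -it elasticsearch ls -ld /usr/share/elasticsearch/snapshot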
Keep me posted on how it goes. Thanks.
Thank you for all the suggestions; in the end it was a problem with the permissions of the folder.
The folder I created was owned by the elasticsearch group but did not give the group write permission, and that was the reason for the problem. I set the folder permissions to 770 and the ownership to root:elasticsearch, and it started working.
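For reference, the fix amounted to something like the following, run on the host against the mounted folder (~/es-snapshots is a placeholder for the actual directory in my home dir, and the commands assume an elasticsearch group exists there):

sudo chown root:elasticsearch ~/es-snapshots
sudo chmod 770 ~/es-snapshots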
system (system) closed this topic on August 25, 2021, 1:47pm
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.