Does snapshot support NFS? Always getting "Access Denied"

ES/Logstash/Kibana version: 5.4.1

https://www.elastic.co/guide/en/elasticsearch/reference/5.5/modules-snapshots.html#modules-snapshots

I followed the official guide to create a snapshot on an NFS shared file system, but it always fails.

It reports "Access Denied...", even though I have already run chown and chmod on the shared directory to fix its ownership and permissions.
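For reference, the commands I ran were along these lines (the exact user and group depend on the install; elasticsearch:elasticsearch is an assumption here):

# run on the NFS server; user/group name is an assumption
sudo chown -R elasticsearch:elasticsearch /nh/esbk
sudo chmod -R 755 /nh/esbk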

When I register a repository:

PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "compress": true,
    "location": "/nh/esbk/my_backup"
  }
}

the request fails with:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[my_backup] [[4BhIJY_dRxC6LpJj1mXYQw, 'RemoteTransportException[[es01][192.168.3.56:9300][internal:admin/repository/verify]]; nested: RepositoryVerificationException[[my_backup] store location [/nh/esbk/my_backup] is not accessible on the node [{es01}{4BhIJY_dRxC6LpJj1mXYQw}{jUg-BTKYSaSzSe5UsMb3Gg}{192.168.3.56}{192.168.3.56:9300}{ml.enabled=true}]]; nested: AccessDeniedException[/nh/esbk/my_backup/tests-SgnyVrJcQt6VCeWMVSp3BQ/data-4BhIJY_dRxC6LpJj1mXYQw.dat];'], [m69fD0RlQym-79YhgBeSQg, 'RemoteTransportException[[es02][192.168.3.49:9300][internal:admin/repository/verify]]; nested: RepositoryVerificationException[[my_backup] store location [/nh/esbk/my_backup] is not accessible on the node [{es02}{m69fD0RlQym-79YhgBeSQg}{9QILs1x_SnmOJ3ed4X65Bg}{192.168.3.49}{192.168.3.49:9300}{ml.enabled=true}]]; nested: AccessDeniedException[/nh/esbk/my_backup/tests-SgnyVrJcQt6VCeWMVSp3BQ/data-m69fD0RlQym-79YhgBeSQg.dat];']]"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[my_backup] [[4BhIJY_dRxC6LpJj1mXYQw, 'RemoteTransportException[[es01][192.168.3.56:9300][internal:admin/repository/verify]]; nested: RepositoryVerificationException[[my_backup] store location [/nh/esbk/my_backup] is not accessible on the node [{es01}{4BhIJY_dRxC6LpJj1mXYQw}{jUg-BTKYSaSzSe5UsMb3Gg}{192.168.3.56}{192.168.3.56:9300}{ml.enabled=true}]]; nested: AccessDeniedException[/nh/esbk/my_backup/tests-SgnyVrJcQt6VCeWMVSp3BQ/data-4BhIJY_dRxC6LpJj1mXYQw.dat];'], [m69fD0RlQym-79YhgBeSQg, 'RemoteTransportException[[es02][192.168.3.49:9300][internal:admin/repository/verify]]; nested: RepositoryVerificationException[[my_backup] store location [/nh/esbk/my_backup] is not accessible on the node [{es02}{m69fD0RlQym-79YhgBeSQg}{9QILs1x_SnmOJ3ed4X65Bg}{192.168.3.49}{192.168.3.49:9300}{ml.enabled=true}]]; nested: AccessDeniedException[/nh/esbk/my_backup/tests-SgnyVrJcQt6VCeWMVSp3BQ/data-m69fD0RlQym-79YhgBeSQg.dat];']]"
  },
  "status": 500
}

But I can still retrieve the repository after the failure:

GET /_snapshot/my_backup
{
  "my_backup": {
    "type": "fs",
    "settings": {
      "compress": "true",
      "location": "/nh/esbk/my_backup"
    }
  }
}
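(As a side note, repository verification can also be re-run explicitly once the permissions are fixed, without re-registering the repository:

POST /_snapshot/my_backup/_verify
)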

I can also create a snapshot in the repository:

PUT /_snapshot/my_backup/snapshot_1
{
  "accepted": true
}
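(Note that "accepted": true only means the snapshot was initiated; it continues in the background. To make the request block until the snapshot finishes, the wait_for_completion parameter can be added:

PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
)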

But when I fetch the snapshot information (GET /_snapshot/my_backup/snapshot_1) in the Dev Tools of Kibana, all the shard snapshots failed. Here is part of the failure output:

{
  "index": "zixun-nginx-access-2017.07.11",
  "index_uuid": "zixun-nginx-access-2017.07.11",
  "shard_id": 0,
  "reason": "IndexShardSnapshotFailedException[Failed to snapshot]; nested: ElasticsearchException[failed to create blob container]; nested: AccessDeniedException[/nh/esbk/my_backup/indices/DSfpzIRoRF67Ya7ZypAkkA/0]; ",
  "node_id": "4BhIJY_dRxC6LpJj1mXYQw",
  "status": "INTERNAL_SERVER_ERROR"
},

By the way, if I back up the indices, can I remove the original logs?

Have you added the path.repo setting as mentioned here? Can you write to that NFS share as the same user that Elasticsearch is running as?

Yes, I have added path.repo: ["/nh/esbk"] to the config file /etc/elasticsearch/elasticsearch.yml, and restarted ES on all 3 nodes.
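For reference, this is the relevant part of the setup (the restart command assumes a systemd-based install):

# /etc/elasticsearch/elasticsearch.yml, on every node
path.repo: ["/nh/esbk"]

# then restart each node (assumes systemd)
sudo systemctl restart elasticsearch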

I installed nfs-server on one of the ES nodes, and the other nodes connect to it with mount.nfs 192.168.3.56:/nh/esbk/my_backup /nh/esbk/my_backup.
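To make that mount survive reboots, an /etc/fstab entry along these lines could be used (a sketch; the mount options are an assumption):

# /etc/fstab on the client nodes
192.168.3.56:/nh/esbk/my_backup  /nh/esbk/my_backup  nfs  defaults,_netdev  0  0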

How can I check whether ES can write to the NFS server?

The question is whether the NFS share is writable when you are logged in as the same user that Elasticsearch runs as.
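For example, something like this on each node (assuming the service user is named elasticsearch):

# run on every node that mounts the share; user name is an assumption
sudo -u elasticsearch touch /nh/esbk/my_backup/write-test
sudo -u elasticsearch rm /nh/esbk/my_backup/write-test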

On top of that, is there maybe a full stack trace in the logs?

Yes, there is a full stack trace at the top of the logs.

Maybe it is a problem with NFS itself. I have read up on NFS permissions; I will set the export options to all_squash,anonuid=0,anongid=0 and have another try... thank you very much!
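On the NFS server that would mean an /etc/exports entry along these lines (the client subnet is an assumption; note that all_squash with anonuid=0/anongid=0 maps every client user to root on the share, which is convenient but loose from a security standpoint):

# /etc/exports on 192.168.3.56; the subnet is an assumption
/nh/esbk/my_backup  192.168.3.0/24(rw,sync,all_squash,anonuid=0,anongid=0)

# re-export without restarting the NFS server
sudo exportfs -ra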

Emm... by the way, once the indices are backed up, can I remove the original logs?

Logs have nothing to do with indices; you can always delete them. Log files are not data. Or are you referring to something else?

Also, always provide the full stack trace when asking questions, if it is available; it makes issues much easier to debug.

Thanks a lot!

OK, I will provide the full stack trace next time.

It succeeded, thanks a lot!

GET /_snapshot/my_backup/snapshot_1
{
  "snapshots": [
    {
      "snapshot": "snapshot_1",
      "uuid": "HTQkyVrFSBmFC3oDZ6TXcg",
      "version_id": 5040199,
      "version": "5.4.1",
      "indices": [
        ".monitoring-kibana-2-2017.07.05",
        "zixun4-nginx-access-2017.07.08",
        "zixun-nginx-access-2017.07.12",
        "nginx-zixun-2017.07.07",
        "zixun1-nginx-access-2017.07.09",
        ".monitoring-kibana-2-2017.07.07",
        ".watcher-history-3-2017.07.04",
        ".watcher-history-3-2017.07.07",
        ".monitoring-es-2-2017.07.05",
        ".monitoring-data-2",
        "zixun-nginx-access-2017.07.10",
        ".monitoring-kibana-2-2017.07.10",
        ".monitoring-es-2-2017.07.08",
        "zixun-nginx-access-2017.07.11",
        "zixun4-nginx-access-2017.07.10",
        ".monitoring-logstash-2-2017.07.12",
        ".monitoring-es-2-2017.07.10",
        "nginx-zixun-2017.07.08",
        "nginx-opgirl-2017.07.07",
        ".watcher-history-3-2017.07.12",
        ".watcher-history-3-2017.07.03",
        ".watcher-history-3-2017.07.01",
        ".watcher-history-3-2017.07.11",
        "zixun3-nginx-access-2017.07.09",
        ".watcher-history-3-2017.06.29",
        ".watcher-history-3-2017.06.28",
        ".security",
        ".monitoring-es-2-2017.07.11",
        ".watcher-history-3-2017.06.30",
        "zixun1-nginx-access-2017.07.10",
        "zixun2-nginx-access-2017.07.08",
        ".monitoring-kibana-2-2017.07.12",
        ".watcher-history-3-2017.07.10",
        "test",
        ".monitoring-logstash-2-2017.07.09",
        "zixun2-nginx-access-2017.07.09",
        ".monitoring-es-2-2017.07.06",
        ".monitoring-kibana-2-2017.07.11",
        "logstash-nginx-access-2017.06.22",
        ".monitoring-logstash-2-2017.07.07",
        "filebeat-2017.07.07",
        ".kibana",
        ".watcher-history-3-2017.06.27",
        ".monitoring-logstash-2-2017.07.10",
        ".watcher-history-3-2017.07.06",
        ".monitoring-kibana-2-2017.07.09",
        "zixun1-nginx-access-2017.07.08",
        ".monitoring-alerts-2",
        ".monitoring-logstash-2-2017.07.08",
        ".watcher-history-3-2017.07.05",
        "nginx2-zixun-access-2017.07.07",
        ".monitoring-kibana-2-2017.07.08",
        ".monitoring-logstash-2-2017.07.11",
        ".triggered_watches",
        ".watcher-history-3-2017.07.08",
        ".monitoring-es-2-2017.07.12",
        ".watcher-history-3-2017.07.09",
        ".monitoring-es-2-2017.07.09",
        "logstash-nginx-access-2017.06.23",
        "zixun4-nginx-access-2017.07.09",
        "zixun3-nginx-access-2017.07.08",
        "zixun3-nginx-access-2017.07.10",
        "logstash-nginxs1-access-log-2017.06.21",
        ".monitoring-kibana-2-2017.07.06",
        "zixun2-nginx-access-2017.07.10",
        "logstash-nginxs1-access-log-2017.06.20",
        ".monitoring-es-2-2017.07.07",
        ".watches",
        ".watcher-history-3-2017.07.02"
      ],
      "state": "SUCCESS",
      "start_time": "2017-07-12T03:01:43.946Z",
      "start_time_in_millis": 1499828503946,
      "end_time": "2017-07-12T03:05:33.513Z",
      "end_time_in_millis": 1499828733513,
      "duration_in_millis": 229567,
      "failures": [],
      "shards": {
        "total": 169,
        "failed": 0,
        "successful": 169
      }
    }
  ]
}