Snapshot failure in ES 2.3.3

I am using ES 2.3.3 on CentOS 7. I am maintaining a 2-node cluster where both nodes are master and data. I want to take a snapshot of the indices. I created the repo path on both nodes and created the snapshot on both nodes. Is it really necessary to create it on both nodes?

I created /home/ES/backup/ on both nodes and set permissions on both nodes. After that I created the snapshot:

PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/home/ES/backup/"
  }
}
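(Side note: one way to check whether every node can actually access a registered repository is the repository verify API; it lists the nodes on which the repository location was successfully verified:)

POST _snapshot/my_backup/_verify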

PUT _snapshot/my_backup/snapshot_2
{
  "indices": "articles,cpuload"
}
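(By default this call returns immediately and the snapshot runs in the background; the same request with the wait_for_completion parameter added would block until the snapshot finishes:)

PUT _snapshot/my_backup/snapshot_2?wait_for_completion=true
{
  "indices": "articles,cpuload"
}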

GET _snapshot/my_backup/snapshot_2

After creating the snapshot I checked it, but it shows 3 shards as failed. Why did these 3 shards fail?

{ "snapshots": [ { "snapshot": "snapshot_2", "version_id": 2030399, "version": "2.3.3", "indices": [ "articles", "cpuload" ], "state": "PARTIAL", "start_time": "2016-08-19T05:55:59.902Z", "start_time_in_millis": 1471586159902, "end_time": "2016-08-19T05:56:00.020Z", "end_time_in_millis": 1471586160020, "duration_in_millis": 118, "failures": [ { "index": "cpuload", "shard_id": 1, "reason": "RepositoryMissingException[[my_backup] missing]", "node_id": "TOiQIaLvS4iZihVSNrqkOA", "status": "INTERNAL_SERVER_ERROR" }, { "index": "articles", "shard_id": 0, "reason": "RepositoryMissingException[[my_backup] missing]", "node_id": "TOiQIaLvS4iZihVSNrqkOA", "status": "INTERNAL_SERVER_ERROR" }, { "index": "cpuload", "shard_id": 3, "reason": "RepositoryMissingException[[my_backup] missing]", "node_id": "TOiQIaLvS4iZihVSNrqkOA", "status": "INTERNAL_SERVER_ERROR" } ], "shards": { "total": 10, "failed": 3, "successful": 7 } } ] }

Anything in your logs?

This is on one node:
[2016-08-19 11:00:45,412][INFO ][repositories ] [ravi-2] update repository [my_backup]
[2016-08-19 11:26:00,019][INFO ][snapshots ] [ravi-2] snapshot [my_backup:snapshot_2] is done

And this is on the other node:
[2016-08-19 11:25:59,951][WARN ][snapshots ] [ravi-1] [[articles][0]] [my_backup:snapshot_2] failed to create snapshot
RepositoryMissingException[[my_backup] missing]
    at org.elasticsearch.repositories.RepositoriesService.indexShardRepository(RepositoriesService.java:347)
    at org.elasticsearch.snapshots.SnapshotShardsService.snapshot(SnapshotShardsService.java:322)
    at org.elasticsearch.snapshots.SnapshotShardsService.access$200(SnapshotShardsService.java:76)
    at org.elasticsearch.snapshots.SnapshotShardsService$1.doRun(SnapshotShardsService.java:296)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

All the failed shards belong to this node only.

Is it a shared filesystem?

No, it's not. Is it necessary to share it?

Yes - Snapshot And Restore | Elasticsearch Guide [2.3] | Elastic

The shared file system repository ("type": "fs") uses the shared file system to store snapshots. In order to register the shared file system repository it is necessary to mount the same shared filesystem to the same location on all master and data nodes. This location (or one of its parent directories) must be registered in the path.repo setting on all master and data nodes.
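In practical terms: export one directory (over NFS, for example), mount it at the same path on both nodes, whitelist that path with path.repo in elasticsearch.yml on both nodes (this requires a node restart), and then register the repository again. A rough sketch, assuming a hypothetical NFS server called nfs-server exporting /exports/es-backup:

# On both nodes, mount the shared export at the same location (hypothetical server and export path):
mount -t nfs nfs-server:/exports/es-backup /home/ES/backup

# elasticsearch.yml on both nodes, then restart each node:
path.repo: ["/home/ES/backup"]

After both nodes are back up, re-register the repository with the same PUT _snapshot/my_backup request as above; the RepositoryMissingException on ravi-1 should no longer appear once both nodes resolve the same shared location.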