Inconsistent backup status between _all and _status views

Hey all,

on ES 1.3.2, on a 3-node cluster, when I get the _all list of snapshots
(/_snapshot/my_backup/_all), they are all reported as successful:
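
For reference, the request is along these lines (same host and repository
as the _status call further below):

  curl -s -XGET http://localhost:9200/_snapshot/my_backup/_all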

[...]
      "1786_v3_2014-10-23"
    ],
    "shards": {
      "failed": 0,
      "successful": 2035,
      "total": 2035
    },
    "snapshot": "backup-dev-20141110-152900",
    "start_time": "2014-11-10T23:29:43.436Z",
    "start_time_in_millis": 1415662183436,
    "state": "SUCCESS"
  }
]
}

However, when asking for the _status of that specific snapshot:

root@svc2.dev.domain.local # curl -s -XGET http://localhost:9200/_snapshot/my_backup/backup-dev-20141110-152900/_status
{
  "error": "RemoteTransportException[[Hannibal King][inet[/192.168.221.33:9300]][cluster/snapshot/status]];
            nested: IndexShardRestoreFailedException[[3_v3_2014-11-03][0] failed to read shard snapshot file];
            nested: FileNotFoundException[/var/vmware/backup/indices/3_v3_2014-11-03/0/snapshot-backup-dev-20141110-152900 (No such file or directory)]; ",
  "status": 500
}

Is this expected (maybe _status digs deeper and checks that the snapshot
files are actually present before returning success), or should I open a
bug? Thanks in advance...
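
For what it's worth, the missing file can be checked for directly using the
path from the error above. A quick sanity check, assuming my_backup is an
fs-type repository mounted at /var/vmware/backup, would be:

  # Run on each of the 3 nodes: a shared-filesystem repository must be
  # visible at the same path everywhere, so a file present on one node
  # but missing on another would explain the discrepancy.
  ls -l /var/vmware/backup/indices/3_v3_2014-11-03/0/snapshot-backup-dev-20141110-152900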
