Restore operation can only be performed on snapshots with state "SUCCESS", or "PARTIAL" if partial=True

I am getting the above error with curator.

> Restore operation can only be performed on snapshots with state "SUCCESS", or "PARTIAL" if partial=True.

My action file is below:

actions:
  1:
    action: restore
    description: >-
      Restore all indices in the most recent snapshot with state SUCCESS.  Wait
      for the restore to complete before continuing.  Do not skip the repository
      filesystem access check.  Use the other options to define the index/shard
      settings for the restore.
    options:
      repository: testbackup
      # If name is blank, the most recent snapshot by age will be selected
      name:
      # If indices is blank, all indices in the snapshot will be restored
      indices:
      extra_settings:
        index_settings:
          number_of_replicas: 0
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
      exclude: True

and my snapshots look like this:

elastic@fantastic:/opt/elasticsearch-curator# curl -XGET 192.168.1.12:9200/_snapshot/testbackup/_all?pretty
{
  "snapshots" : [
    {
      "snapshot" : "curator-20170812180638",
      "uuid" : "BoJ5pfwqQHmrQlVNHRl2qQ",
      "version_id" : 5040099,
      "version" : "5.4.0",
      "indices" : [
        ...
      ],
      "shards" : {
        "total" : 995,
        "failed" : 380,
        "successful" : 615
      }
    }
  ]
}

When I switch exclude to False, like this:

- filtertype: state
  state: SUCCESS
  exclude: False

I am getting back...

2017-08-12 19:37:29,956 DEBUG curator.snapshotlist iterate_filters:479 Parsed filter args: {'filtertype': 'state', 'state': 'SUCCESS', 'exclude': False}
2017-08-12 19:37:29,956 DEBUG curator.utils iterate_filters:488 Filter args: {'state': 'SUCCESS', 'exclude': False}
2017-08-12 19:37:29,956 DEBUG curator.utils iterate_filters:489 Pre-instance: ['curator-20170812180638']
2017-08-12 19:37:29,956 DEBUG curator.snapshotlist filter_by_state:319 Filter by state: Snapshot: curator-20170812180638
2017-08-12 19:37:29,956 DEBUG curator.snapshotlist __not_actionable:51 Snapshot curator-20170812180638 is not actionable, removing from list.
2017-08-12 19:37:29,956 DEBUG curator.utils iterate_filters:491 Post-instance:
2017-08-12 19:37:29,956 ERROR curator.cli cli:187 Unable to complete action "restore". No actionable items in list: <class 'curator.exceptions.NoSnapshots'>

Could someone shed some light? I know that some shards failed when I took the snapshot; is that why the restore doesn't work? If that's the problem, is there a way to ignore the failed shards and restore the rest?

Thanks in advance

Update 1: I also tried partial: True, as below, and the same error comes up.

actions:
  1:
    action: restore
    description: >-
      Restore all indices in the most recent snapshot with state SUCCESS.  Wait
      for the restore to complete before continuing.  Do not skip the repository
      filesystem access check.  Use the other options to define the index/shard
      settings for the restore.
    options:
      repository: testbackup
      # If name is blank, the most recent snapshot by age will be selected
      name:
      partial: True
      # If indices is blank, all indices in the snapshot will be restored
      indices:
      extra_settings:
        index_settings:
          number_of_replicas: 0
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
      exclude: True

This suggests you want to exclude snapshots which were SUCCESSful. Is that the case? (With exclude: True, the state filter removes matching snapshots from the actionable list, so a restore would have nothing left to act on.)

I guess you are right - I don't want to do that :slight_smile:. However, when I change it to

state: Failed

I am getting the following:

2017-08-12 19:54:31,176 ERROR curator.validators.SchemaCheck result:64 Schema error: not a valid value for dictionary value @ data['state']
2017-08-12 19:54:31,176 ERROR curator.validators.SchemaCheck result:64 Schema error: Configuration: filter: Location: Action ID "1", action "restore", filter #0: {'filtertype': 'state', 'state': 'Failed', 'exclude': True}: Bad Value: "Failed", not a valid value for dictionary value @ data['state']. Check configuration file.
Configuration: filters: Location: Action ID "1", action "restore", "filters": Bad Value: "None", Configuration: filter: Location: Action ID "1", action "restore", filter #0: {'filtertype': 'state', 'state': 'Failed', 'exclude': True}: Bad Value: "Failed", not a valid value for dictionary value @ data['state']. Check configuration file.. Check configuration file.

Try running curator_cli with show_snapshots:

curator_cli show_snapshots --repository MY_REPOSITORY_NAME
daily-20170810090056
daily-20170811085005
daily-20170812090140
daily-20170813085425
daily-20170814090211
daily-20170815085030
daily-20170816085625
daily-20170817083010
...

I guess that's fine.

elastic@fantastic:/opt/elasticsearch-curator# curator_cli --host 192.168.1.12 show_snapshots --repository testbackup
curator-20170812180638

States must be all-caps:

# Optional and Any come from voluptuous, the schema library Curator uses
from voluptuous import Any, Optional

def state(**kwargs):
    # This setting is only used with the state filtertype.
    return { Optional('state', default='SUCCESS'): Any(
        'SUCCESS', 'PARTIAL', 'FAILED', 'IN_PROGRESS') }
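
So the state value in your filter has to be one of those four uppercase strings, e.g.:

    - filtertype: state
      state: FAILED
      exclude: True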

You can also apply filters:

curator_cli show_snapshots --repository MY_REPOSITORY_NAME --filter_list '{"filtertype":"state","state":"SUCCESS"}'

Still it doesn't love it... Changing it to FAILED gives...

2017-08-12 20:00:06,608 DEBUG curator.snapshotlist iterate_filters:479 Parsed filter args: {'filtertype': 'state', 'state': 'FAILED', 'exclude': True}
2017-08-12 20:00:06,608 DEBUG curator.utils iterate_filters:488 Filter args: {'state': 'FAILED', 'exclude': True}
2017-08-12 20:00:06,608 DEBUG curator.utils iterate_filters:489 Pre-instance: ['curator-20170812180638']
2017-08-12 20:00:06,608 DEBUG curator.snapshotlist filter_by_state:319 Filter by state: Snapshot: curator-20170812180638
2017-08-12 20:00:06,609 DEBUG curator.snapshotlist __not_actionable:51 Snapshot curator-20170812180638 is not actionable, removing from list.
2017-08-12 20:00:06,609 DEBUG curator.utils iterate_filters:491 Post-instance:
2017-08-12 20:00:06,609 ERROR curator.cli cli:187 Unable to complete action "restore". No actionable items in list: <class 'curator.exceptions.NoSnapshots'>

You've told Curator to exclude "FAILED", but you don't know what state it is in. It doesn't seem to be in state "SUCCESS" or "FAILED". That leaves "PARTIAL" or "IN_PROGRESS".

You should drop the exclude line altogether, and have it only restore SUCCESS. Restoring a PARTIAL is something I'd only do in an emergency. And you can't restore from one in state FAILED or IN_PROGRESS. As a matter of fact, you can't restore while another snapshot is IN_PROGRESS anyway.
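
A minimal sketch of that filters block (exclude defaults to False, so omitting it keeps only matching snapshots):

    filters:
    - filtertype: state
      state: SUCCESS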

Still it doesn't work.

fantastic:/opt/elasticsearch-curator# cat actionrestore.yaml
actions:
  1:
    action: restore
    description: >-
      Restore all indices in the most recent snapshot with state SUCCESS.  Wait
      for the restore to complete before continuing.  Do not skip the repository
      filesystem access check.  Use the other options to define the index/shard
      settings for the restore.
    options:
      repository: testbackup
      # If name is blank, the most recent snapshot by age will be selected
      name:
      #partial: True
      # If indices is blank, all indices in the snapshot will be restored
      indices:
      extra_settings:
        index_settings:
          number_of_replicas: 0
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS

And the error message is:

2017-08-13 12:00:40,534 DEBUG curator.validators.SchemaCheck init:27 "filter" config: {'filtertype': 'state', 'state': 'SUCCESS', 'exclude': False}
2017-08-13 12:00:40,534 DEBUG curator.snapshotlist iterate_filters:479 Parsed filter args: {'filtertype': 'state', 'state': 'SUCCESS', 'exclude': False}
2017-08-13 12:00:40,534 DEBUG curator.utils iterate_filters:488 Filter args: {'state': 'SUCCESS', 'exclude': False}
2017-08-13 12:00:40,534 DEBUG curator.utils iterate_filters:489 Pre-instance: ['curator-20170812180638']
2017-08-13 12:00:40,535 DEBUG curator.snapshotlist filter_by_state:319 Filter by state: Snapshot: curator-20170812180638
2017-08-13 12:00:40,535 DEBUG curator.snapshotlist __not_actionable:51 Snapshot curator-20170812180638 is not actionable, removing from list.
2017-08-13 12:00:40,535 DEBUG curator.utils iterate_filters:491 Post-instance:
2017-08-13 12:00:40,535 ERROR curator.cli cli:187 Unable to complete action "restore". No actionable items in list: <class 'curator.exceptions.NoSnapshots'>

What do you see if you run:

curl -XGET 'http://192.168.1.12:9200/_snapshot/testbackup/curator-20170812180638?pretty'

{
  "snapshots" : [
    {
      "snapshot" : "curator-20170812180638",
      "uuid" : "BoJ5pfwqQHmrQlVNHRl2qQ",
      "version_id" : 5040099,
      "version" : "5.4.0",
      "indices" : [
        "logstash-test-2017.04.19",
        "logstash-test-2017.04.15",
        "logstash-test-2017.06.07",
        "logstash-test-2017.06.15",
        "logstash-test-2017.02.09",
        "logstash-test-2017.04.22",
        "logstash-test-2017.05.08",
        "logstash-test-2017.01.09",
        "logstash-test-2017.03.09",
        "logstash-test-2017.05.28",
        "logstash-test-2017.07.27",
        "logstash-test-2017.05.06",
        "logstash-test-2017.01.05",
        "logstash-test-2017.04.21",
        "logstash-test-2017.06.14",
        "logstash-test-2017.03.29",
        "logstash-test-2017.05.05",
        "logstash-test-2017.07.09",

        ... keeps going ...

        "logstash-test-2017.07.18",
        "logstash-test-2017.02.08",
        "logstash-test-2017.06.27",
        "logstash-test-2017.07.20",
        "logstash-test-2017.02.12",
        "logstash-test-2017.01.29",
        "logstash-test-2017.02.24",
        "logstash-test-2017.06.16"
      ],
      "state" : "FAILED",
      "reason" : "Indices don't have primary shards [logstash-test-2016.12.29, logstash-test-2017.03.01, logstash-test-2017.04.13, logstash-test-2017.01.20, logstash-test-2017.03.02, logstash-test-2016.12.28, logstash-test-2017.04.11, logstash-test-2017.01.22, logstash-test-2017.04.12, logstash-test-2017.01.24, logstash-test-2017.01.23, logstash-test-2017.01.26, logstash-test-2017.03.03, logstash-test-2017.03.09, logstash-test-2017.01.28, logstash-test-2017.01.27, logstash-test-2016.12.23, logstash-test-2017.01.29, logstash-test-2016.12.22, logstash-test-2016.12.24, logstash-test-2016.12.27, logstash-test-2016.12.26, logstash-test-2017.04.20, logstash-test-2017.04.21, logstash-test-2017.01.31, logstash-test-2017.03.13, logstash-test-2017.04.25, logstash-test-2017.03.10, logstash-test-2017.02.01, logstash-test-2017.04.22, logstash-test-2017.02.03, logstash-test-2017.04.28, logstash-test-2017.03.17, logstash-test-2017.03.14, logstash-test-2017.04.26, logstash-test-2017.03.15, logstash-test-2017.02.04, logstash-test-2017.03.18, logstash-test-2017.02.09, logstash-test-2017.03.19, logstash-test-2017.03.23, logstash-test-2017.03.24, logstash-test-2017.05.02, logstash-test-2017.03.21, logstash-test-2017.03.22, logstash-test-2017.01.02, logstash-test-2017.01.01, logstash-test-2017.05.07, logstash-test-2017.03.28, logstash-test-2017.01.04, logstash-test-2017.03.25, logstash-test-2017.01.03, logstash-test-2017.01.06, logstash-test-2017.01.05, logstash-test-2017.01.08, logstash-test-2017.03.29, logstash-test-2017.01.07, logstash-test-2016.12.30, logstash-test-2017.03.30, logstash-test-2017.03.31, logstash-test-2017.04.02, logstash-test-2017.04.03, logstash-test-2017.01.10, logstash-test-2017.04.01, logstash-test-2017.01.13, logstash-test-2017.04.06, logstash-test-2017.01.12, logstash-test-2017.04.04, logstash-test-2017.01.15, logstash-test-2017.01.14, logstash-test-2017.04.05, logstash-test-2017.01.17, logstash-test-2017.01.16, logstash-test-2017.04.08, logstash-test-2017.01.19, logstash-test-2017.01.18, logstash-test-2017.04.09]",
      "start_time" : "2017-08-12T18:06:55.791Z",
      "start_time_in_millis" : 1502561215791,
      "end_time" : "2017-08-12T18:06:59.708Z",
      "end_time_in_millis" : 1502561219708,
      "duration_in_millis" : 3917,
      "failures" : [
        {
          "index" : "logstash-test-2017.05.02",
          "index_uuid" : "logstash-test-2017.05.02",
          "shard_id" : 0,
          "reason" : "primary shard is not allocated",
          "status" : "INTERNAL_SERVER_ERROR"
        },
        {
          "index" : "logstash-test-2017.03.23",
          "index_uuid" : "logstash-test-2017.03.23",
          "shard_id" : 4,
          "reason" : "primary shard is not allocated",
          "status" : "INTERNAL_SERVER_ERROR"
        },
        {
          "index" : "logstash-test-2017.03.24",
          "index_uuid" : "logstash-test-2017.03.24",
          "shard_id" : 0,
          "reason" : "primary shard is not allocated",
          "status" : "INTERNAL_SERVER_ERROR"
        },
        {
          "index" : "logstash-test-2017.01.16",
          "index_uuid" : "logstash-test-2017.01.16",
          "shard_id" : 0,
          "reason" : "primary shard is not allocated",
          "status" : "INTERNAL_SERVER_ERROR"
        },
        {
          "index" : "logstash-test-2017.01.27",
          "index_uuid" : "logstash-test-2017.01.27",
          "shard_id" : 3,
          "reason" : "primary shard is not allocated",
          "status" : "INTERNAL_SERVER_ERROR"
        },
        {
          "index" : "logstash-test-2017.03.31",
          "index_uuid" : "logstash-test-2017.03.31",
          "shard_id" : 3,
          "reason" : "primary shard is not allocated",
          "status" : "INTERNAL_SERVER_ERROR"
        },
        {
          "index" : "logstash-test-2017.03.14",
          "index_uuid" : "logstash-test-2017.03.14",
          "shard_id" : 1,
          "reason" : "primary shard is not allocated",
          "status" : "INTERNAL_SERVER_ERROR"
        },
        {
          "index" : "logstash-test-2017.03.23",
          "index_uuid" : "logstash-test-2017.03.23",
          "shard_id" : 0,
          "reason" : "primary shard is not allocated",
          "status" : "INTERNAL_SERVER_ERROR"
        },

        ... keeps going ...

        {
          "index" : "logstash-test-2017.03.01",
          "index_uuid" : "logstash-test-2017.03.01",
          "shard_id" : 3,
          "reason" : "primary shard is not allocated",
          "status" : "INTERNAL_SERVER_ERROR"
        }
      ],
      "shards" : {
        "total" : 995,
        "failed" : 380,
        "successful" : 615
      }
    }
  ]
}

And that's why you can't restore anything from the snapshot. It never completed successfully. The reason is also listed:

"reason" : "Indices don't have primary shards [logstash-test-2016.12.29, logstash-test-2017.03.01, logstash-test-2017.04.13, ...]"

Is there a way to fix that? Or to tell it to skip these?

Those indices have no primary shards in the snapshot. You can't restore what isn't there. The whole snapshot FAILED. You should probably delete that snapshot and try again. I'd also check to make sure that ignore_unavailable is set to True in your snapshot configuration.
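
For reference, a rough sketch of a Curator snapshot action with that option set (the repository name is the one from this thread; the other options are illustrative):

    actions:
      1:
        action: snapshot
        description: >-
          Snapshot all indices. ignore_unavailable lets the snapshot proceed
          when an index is missing or closed instead of failing it.
        options:
          repository: testbackup
          ignore_unavailable: True
          wait_for_completion: True
          max_wait: 3600
          wait_interval: 10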
