Empty snapshots

Hi,

I have a problem when creating and restoring snapshots in Elasticsearch: the snapshots turn out to be empty.

Some specifications:

  • I'm on a Mac (macOS Monterey 12.6.2), if relevant. I'm running Elasticsearch locally, and the idea is to store the snapshots on the same machine (for testing).
  • Elasticsearch 8.6.0
  • Kibana 8.6.0

I followed these instructions:

I configured the elasticsearch.yml file to register the backup path via path.repo.
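Concretely, the relevant snippet of my elasticsearch.yml looks like this:

# elasticsearch.yml (snippet)
# Only paths registered here may be used as "fs" repository locations
path.repo: /Users/my_backups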

To register the snapshot repository I'm running:

curl -X PUT "localhost:9200/_snapshot/testing_backup?verify=true&pretty" \
  -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "testing_backup_location"
  }
}'

# Receiving the response:
{
  "acknowledged" : true
}
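To double-check the repository, I believe it can also be re-verified on demand with the verify API, e.g.:

# Optional sanity check: verify that the node can write to the repository
curl -X POST "localhost:9200/_snapshot/testing_backup/_verify?pretty"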

To create the snapshot I'm using:

curl -X PUT "localhost:9200/_snapshot/testing_backup/testing_snapshot?wait_for_completion=true&pretty" \
  -H 'Content-Type: application/json' -d'
{
  "indices": "test_index"
}'

Finally, when I check the snapshots, I get:

curl -X GET localhost:9200/_snapshot/testing_backup/_current?pretty
{
  "snapshots" : [ ],
  "total" : 0,
  "remaining" : 0
}

# And
curl -X GET localhost:9200/_snapshot/_status?pretty
{
  "snapshots" : [ ]
}

I also tried restoring an index from a snapshot I had created. In Kibana I could see that the index was created along with its mapping, but it contained no data (size 0B as well).

Any insights would be much appreciated.
Best regards,

What is the response for that snapshot creation call?

{
  "snapshot" : {
    "snapshot" : "testing_snapshot",
    "uuid" : "PsrjsuGtQ16SnIamdY9FsA",
    "repository" : "testing_backup",
    "version_id" : 8060099,
    "version" : "8.6.0",
    "indices" : [
      ".geoip_databases",
      ".kibana_8.6.0_001",
      ".apm-agent-configuration",
      "test_index",
      ".kibana_task_manager_8.6.0_001",
      ".apm-custom-link"
    ],
    "data_streams" : [ ],
    "include_global_state" : true,
    "state" : "SUCCESS",
    "start_time" : "2023-02-09T20:38:12.132Z",
    "start_time_in_millis" : 1675975092132,
    "end_time" : "2023-02-09T20:38:18.638Z",
    "end_time_in_millis" : 1675975098638,
    "duration_in_millis" : 6506,
    "failures" : [ ],
    "shards" : {
      "total" : 6,
      "failed" : 0,
      "successful" : 6
    },
    "feature_states" : [
      {
        "feature_name" : "geoip",
        "indices" : [
          ".geoip_databases"
        ]
      },
      {
        "feature_name" : "kibana",
        "indices" : [
          ".apm-custom-link",
          ".kibana_8.6.0_001",
          ".apm-agent-configuration",
          ".kibana_task_manager_8.6.0_001"
        ]
      }
    ]
  }
}

Can you also provide the output of GET _snapshot/testing_backup/testing_snapshot?

Sure, here it is:

{
  "snapshots" : [
    {
      "snapshot" : "testing_snapshot",
      "uuid" : "PsrjsuGtQ16SnIamdY9FsA",
      "repository" : "testing_backup",
      "version_id" : 8060099,
      "version" : "8.6.0",
      "indices" : [
        ".geoip_databases",
        ".kibana_8.6.0_001",
        ".apm-agent-configuration",
        "test_index",
        ".kibana_task_manager_8.6.0_001",
        ".apm-custom-link"
      ],
      "data_streams" : [ ],
      "include_global_state" : true,
      "state" : "SUCCESS",
      "start_time" : "2023-02-09T20:38:12.132Z",
      "start_time_in_millis" : 1675975092132,
      "end_time" : "2023-02-09T20:38:18.638Z",
      "end_time_in_millis" : 1675975098638,
      "duration_in_millis" : 6506,
      "failures" : [ ],
      "shards" : {
        "total" : 6,
        "failed" : 0,
        "successful" : 6
      },
      "feature_states" : [
        {
          "feature_name" : "geoip",
          "indices" : [
            ".geoip_databases"
          ]
        },
        {
          "feature_name" : "kibana",
          "indices" : [
            ".apm-custom-link",
            ".kibana_8.6.0_001",
            ".apm-agent-configuration",
            ".kibana_task_manager_8.6.0_001"
          ]
        }
      ]
    }
  ],
  "total" : 1,
  "remaining" : 0
}

OK, so the snapshot was successful and your test_index is present. When the snapshot was taken, did you have any documents stored in it?
Can you please try:

POST _snapshot/testing_backup/testing_snapshot/_restore
{
  "indices": "test_index"
}

followed by GET _cat/indices/test_index?v and provide the output?

Yes. If I run this before creating the snapshot

curl -X GET "localhost:9200/_cat/indices/test_index?v&pretty"

# I receive 
health status index       uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   test_index B-j4fJQCR5qYafsh01yaQg   1   1    2010722            0    183.9mb        183.9mb

Hmm, but this will try to restore test_index, which already exists. Is that OK? In my previous attempts I deleted the index and then restored it, as described in Restore the snapshot.
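(Concretely, what I tried back then was roughly this sequence, with the same index and repository names as above:)

# Delete the existing index first...
curl -X DELETE "localhost:9200/test_index?pretty"

# ...then restore it from the snapshot
curl -X POST "localhost:9200/_snapshot/testing_backup/testing_snapshot/_restore?pretty" \
  -H 'Content-Type: application/json' -d'
{
  "indices": "test_index"
}'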

Ah right, maybe try it like this then:

POST _snapshot/testing_backup/testing_snapshot/_restore
{
  "indices": "test_index",
  "rename_pattern": "(.+)",
  "rename_replacement": "restored-$1"
}

followed by GET _cat/indices/restored*?v

GET _snapshot/$REPO/_current only shows currently-running snapshots, so it returns an empty list once the snapshot has completed. You want GET _snapshot/$REPO/_all.

Similarly, GET _snapshot/_status only returns the status of currently-running snapshots.
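For example, with your repository name:

# Lists every snapshot in the repository, completed ones included
GET _snapshot/testing_backup/_all

# The per-snapshot status API also works for completed snapshots
GET _snapshot/testing_backup/testing_snapshot/_status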


From the _restore call I got

{"accepted":true}

From GET _cat/indices/restored*?v I got

health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
red    open   restored-test_index 16WPI_uGQRavBQ8VZqV0sQ   1   1  

From GET _snapshot/testing_backup/_all I got the same information that I received using GET _snapshot/testing_backup/testing_snapshot.

OK, wait for some time; the index needs to finish recovering (its primary shards must be assigned, so it should go from red to at least yellow) before the data is available. The restore can take a while, since its speed depends on your disk/network bandwidth and the repository configuration.
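You can poll for that with the cluster health API, for instance something like:

# Waits up to 60s for the restored index's primary shards to be assigned
GET _cluster/health/restored-test_index?wait_for_status=yellow&timeout=60s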

test_index, the index that I'm taking the snapshot from, is yellow. Is that relevant?

@DavidTurner I guess this shouldn't be an issue, right? Essentially the primary shard is available to take a snapshot from.
@tmslara.a is your restored-test_index also yellow, or is it still red? If red, can you please check GET _cat/shards/restored*?v&h=index,shard,prirep,state and GET _cluster/allocation/explain to get details on why the shard is not being initialized?

This is empty, I don't receive anything back.

From GET _cluster/allocation/explain I got

{
  "note" : "No shard was specified in the explain API request, so this response explains a randomly chosen unassigned shard. There may be other unassigned shards in this cluster which cannot be assigned for different reasons. It may not be possible to assign this shard until one of the other shards is assigned correctly. To explain the allocation of other shards (whether assigned or unassigned) you must specify the target shard in the request to this API.",
  "index" : "restored-test_index",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NEW_INDEX_RESTORED",
    "at" : "2023-02-10T17:19:11.263Z",
    "details" : "restore_source[testing_backup/testing_snapshot]",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",
  "node_allocation_decisions" : [
    {
      "node_id" : "LWcDogKwQ7u0GPRogQHUEA",
      "node_name" : "Tomass-MacBook-Pro.local",
      "transport_address" : "127.0.0.1:9300",
      "node_attributes" : {
        "ml.machine_memory" : "8589934592",
        "xpack.installed" : "true",
        "ml.allocated_processors_double" : "4.0",
        "ml.max_jvm_size" : "4294967296",
        "ml.allocated_processors" : "4"
      },
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [testing_backup:testing_snapshot/iU6x5kj5S9CgkrBLcO6Plw] - manually close or delete the index [restored-test_index] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard. Details: [restore_source[testing_backup/testing_snapshot]]"
        }
      ]
    }
  ]
}

I think the yellow status is normal, since I'm running everything on a single node (my PC).

I guess we need to wait for this task to complete first.
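If it stays red, the decider message in your allocation explain output says the restore has already failed; in that case you could delete the restored index and retry, something along these lines:

# Remove the failed restore target...
DELETE restored-test_index

# ...and retry the restore
POST _snapshot/testing_backup/testing_snapshot/_restore
{
  "indices": "test_index",
  "rename_pattern": "(.+)",
  "rename_replacement": "restored-$1"
}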

Everything is still the same.
