Is it possible to restore a single backing index for a data stream

We somehow lost both a shard and its replica for the head of our data stream.
I forced a rollover to get the data stream accepting data again and removed the empty index (using the API).
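
For reference, the calls were roughly these (the data stream and backing index names here are placeholders, not our real ones):

POST /my-data-stream/_rollover

DELETE /.ds-my-data-stream-2023.08.27-000001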

I now want to restore that backing index from a snapshot, but the restore fails saying that the index still exists.

Could you post the api call/restore details and the error message?

I was working from Kibana so I don't have the details. Will look at doing it from the API.

Hi @Russell_Fulton, you got me thinking about this and the answer is yes!

So this is what I did ... I had an Istio logs data stream.

In my case I deleted this backing index:
.ds-logs-istio.access_logs-default-2023.08.27-000453

Then I set it as a "custom" index pattern in the restore UI.

Then I restored it.
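
(If you would rather do the restore via the API instead of the UI, it looks roughly like this; the repository and snapshot names here are placeholders:)

POST _snapshot/my_repository/my_snapshot/_restore
{
  "indices": ".ds-logs-istio.access_logs-default-2023.08.27-000453",
  "include_global_state": false
}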

At this point the backing index is restored... but I could not see the data in some places...
BTW it can take a while to restore etc. etc.

@Russell_Fulton Now here is the secret sauce ... even though you restored the index, you still need to add it back to the data stream. In my case I had to use the Modify Data Streams API:

POST _data_stream/_modify
{
  "actions": [
    {
      "add_backing_index": {
        "data_stream": "logs-istio.access_logs-default",
        "index": ".ds-logs-istio.access_logs-default-2023.08.27-000453"
      }
    }
  ]
}

I did not see a way to do this step via the UI.
Then it was all beautiful!
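
One way to double-check is the get data stream API, which lists the current backing indices:

GET _data_stream/logs-istio.access_logs-default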


Thanks Stephen!

I went back and verified what happened when I tried the restore and got an error message:

I remember getting that when I first tried, while the screwed-up index was still there. I have since deleted the (empty) index, and now when I try the restore:

[screenshot: 2023-08-30 at 4.13.39 PM]

This is where I got to before... listing the indexes in the snapshot shows:

[screenshot: 2023-08-30 at 4.15.47 PM]

and I find that the empty index is back. There are some failed shards, but this index was not listed as affected. I then went back to a snapshot with no failed shards and got the same result.

The index that got created looks like this:

I am guessing this is created before the restore starts.

I went back to the API to see what I could find out about the snapshot, but got a 404 when I did a GET on the specific snapshot. I then tried to list the whole repository:

elasticsearch@secesprd02:~$ curl -X GET  -u elastic --noproxy \*  "https://es2.insec.auckland.ac.nz:9200/_snapshot/daily/*/_status?pretty" -H 'Content-Type: application/json' 
Enter host password for user 'elastic':
{
  "error" : {
    "root_cause" : [
      {
        "type" : "snapshot_missing_exception",
        "reason" : "[daily:*] is missing"
      }
    ],
    "type" : "snapshot_missing_exception",
    "reason" : "[daily:*] is missing"
  },
  "status" : 404
}

Sigh...

First, is the index actually there in the snapshot? Are you sure you are not restoring an empty index?

I got the same "no restored snapshots" message, but it was really working...
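
You can check what is actually in a snapshot with the get snapshot API; the response lists the indices the snapshot contains (the snapshot name here is just a placeholder):

GET _snapshot/daily/your-snapshot-name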

Second, did you wait? It can take a while... the index can look empty for a while, especially if it is large. I noticed the same thing; it took several minutes because the index was still loading/restoring and had not refreshed.

Did you run this to see if the index is there?
GET _cat/indices/.ds-sec*?v

You can also refresh the index:

POST /my-index-000001/_refresh
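
You can also watch restore progress with the cat recovery API (assuming your backing indices match the .ds-sec* pattern above):

GET _cat/recovery/.ds-sec*?v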

Not sure what is up with your snapshot; you need a good snapshot...

Not sure about that API call; I am not sure where you got the _status path from, I don't see that in the API docs.

And your repository is named daily?

what happens if you just run

GET _snapshot/*/*
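
or, to list every snapshot in just the daily repository:

GET _snapshot/daily/_all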


"I got the same 'no restored snapshots' message, but it was really working..."

Ah! that is so confusing! sigh...

Some indication that there is a restoration in progress would be nice : )

Thanks, it turns out that the page does not show anything until at least one snapshot has been fully restored.

Added the secret sauce and now Kibana shows just one day's data missing instead of ten.

Thanks again @stephenb for your expertise and patience!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.