Swapping alias workflow

Hi,

I'm looking for some advice about swapping what an alias is pointing at.

We're using logstash to index events into ES, in the good ole format of logstash-metrics-YYYY-MM-dd.

Now, we've discovered that we need to re-ingest a whole bunch of logs from more than a month ago to now (just realised we had a bug with one of the existing logstash transforms).

Normally we'd just wipe out all the existing indices and then reindex over them, but I'm trying to explore whether there's a less destructive way of doing this using aliases.

I'm aware that:

  • aliases resolve wildcards at a point in time: an alias created against logstash-* will pick up the existing logstash-2021-01-01 and logstash-2021-01-02 indices, but when a new logstash-2021-01-03 index is created later it will NOT be added to the alias automatically.
  • there is a way, via the index template, to have newly created indices added to an alias automatically
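
A minimal sketch of both points (the alias name `logstash-metrics` here is illustrative, not from any particular setup). Creating an alias against a wildcard resolves the pattern once, at creation time:

```
POST /_aliases
{
  "actions": [
    { "add": { "index": "logstash-*", "alias": "logstash-metrics" } }
  ]
}
```

For the second point, an index template can carry an `aliases` section, so every newly created index matching the pattern joins the alias on creation:

```
PUT /_index_template/logstash-metrics
{
  "index_patterns": ["logstash-*"],
  "template": {
    "aliases": { "logstash-metrics": {} }
  }
}
```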

What I'm hoping for some guidance on, is what the workflow could be.

Here's what I'm currently thinking:

  • create an alias over the current set of indices, update the existing index template so new indices keep being added to that alias, and make sure all our other software points at the new alias.
  • separately, have a new LS instance that re-ingests all the older logs into a new set of indices with a different name format - this one will NOT (yet) have an index template with an alias set up
  • once all the old logs have been re-ingested, stop both LS instances
  • then swap the alias over to the newer index name format and check that it works
  • if it works, redeploy the newer LS instance, this time with the index template set up to add new indices to the alias
  • delete all the old indices
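
The swap step in the middle can be done atomically with a single `_aliases` call, removing the old pattern and adding the new one in the same action list (again with illustrative names, assuming `logstash-v2-*` as the new index name format):

```
POST /_aliases
{
  "actions": [
    { "remove": { "index": "logstash-*", "alias": "logstash-metrics" } },
    { "add": { "index": "logstash-v2-*", "alias": "logstash-metrics" } }
  ]
}
```

Because both actions are applied in one atomic operation, anything reading through the alias never sees an empty or half-swapped alias mid-switch.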

Does that workflow seem like it'd make sense and work?

thank you in advance!

That seems pretty sane.

If you're reindexing data, why not consider ILM ("ILM: Manage the index lifecycle", Elasticsearch Reference [7.10])?

Short timelines, no prior knowledge unfortunately.

I'll definitely have a look at it for the next time we need to do something like that.

Thank you for the suggestion as well as the feedback!