Recommended Approach to address index pattern

Dear Experts,

I have a legacy index that I created a few years ago, before I understood index management well, and I'm looking for a recommendation on how to address the problem.

I'm looking for the least impactful and least time-intensive way to add the new pattern, update all existing visualizations, and reindex the data.

Right now it's one fat index with no partition strategy, e.g. no YYYY-MM-DD suffix. The index name is something like bigfatstupidindex :slight_smile:

On average I get around 10 million events per month and am thinking of moving to indexname-YYYY.MM.

The current stats of my fat index:

  • 74GB storage size (37GB primary)
  • 76 million documents
  • 22 million deleted documents

My understanding of how I'd have to address this:

  • Create the new index pattern
  • Update my Logstash ingestion pipeline
  • Use the reindex API (want to confirm?) to move the data into new indices named indexname-YYYY.MM
  • Find all applicable visualizations and update them :frowning:
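The reindex step can route each document into its monthly index in a single `_reindex` call, using a Painless script that rewrites `ctx._index`. Below is a minimal sketch of the request body, not a tested migration: the `events` prefix is a placeholder, and it assumes the date field is `@timestamp` stored as an ISO string like `2019-03-15T12:00:00Z`.

```python
import json

def reindex_body(source_index, new_prefix, date_field="@timestamp"):
    """Build a _reindex request body whose Painless script rewrites
    ctx._index so each document lands in a prefix-YYYY.MM index,
    derived from the first 7 characters of its ISO date string."""
    painless = (
        f"ctx._index = '{new_prefix}-' + "
        f"ctx._source['{date_field}'].substring(0, 7).replace('-', '.');"
    )
    return {
        "source": {"index": source_index},
        "dest": {"index": new_prefix},  # fallback; the script overrides it per document
        "script": {"lang": "painless", "source": painless},
    }

# POST this body to _reindex (e.g. via curl or the Kibana Dev Tools console)
print(json.dumps(reindex_body("bigfatstupidindex", "events"), indent=2))
```

For 76 million documents you would likely also want `wait_for_completion=false` so the reindex runs as a background task you can poll.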

Any suggestions welcome.


It's not a huge index. I would look at ILM and set things up right from the start and then reindex into that.

You will need to alter the index pattern for the dashboards, but if you export them and do a search/replace, you can minimise the work.
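The search/replace step can be as simple as a string substitution over the raw export file, since index-pattern titles appear verbatim inside the serialized saved objects. A minimal sketch, where "bigfatstupidindex" and "events-*" stand in for the real old and new pattern titles:

```python
def rewrite_export(text, old, new):
    """Plain string replacement over the raw export: every dashboard and
    visualization referencing the old pattern is updated at once."""
    return text.replace(old, new)

# Usage against a file exported from Management > Saved Objects:
# with open("export.json") as f:
#     fixed = rewrite_export(f.read(), "bigfatstupidindex", "events-*")
sample = '{"index": "bigfatstupidindex"}'
print(rewrite_export(sample, "bigfatstupidindex", "events-*"))
```

Re-import the rewritten file through the same Saved Objects UI afterwards.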

@warkolm - thank you for your reply over the weekend.

Unfortunately my cluster is on 6.6.0, and on Elastic Cloud the ILM capability is only available in the latest releases.

Also, when I use the saved objects feature and look at relationships, it only shows 2, when a quick search for visualizations on that index name turns up at least 28. My concern is missing something :frowning:
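One rough way to double-check the relationship count is to search the .kibana index directly for saved objects of type "visualization" that mention the index name. The query shape below assumes the 6.x saved-object layout, where each document carries a top-level `type` field; treat it as a sketch to verify against your cluster:

```python
import json

def find_visualizations_query(index_name):
    """Build a search body that filters saved objects to visualizations
    and full-text matches the quoted index name anywhere in the doc."""
    return {
        "size": 100,
        "query": {
            "bool": {
                "filter": [{"term": {"type": "visualization"}}],
                "must": [{"query_string": {"query": f'"{index_name}"'}}],
            }
        },
    }

# e.g. GET .kibana/_search with this body, then compare hits.total
# against what the relationships view reports
print(json.dumps(find_visualizations_query("bigfatstupidindex"), indent=2))
```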

Any options considering these limitations?


If you are using the Elasticsearch Service, then upgrading is very simple.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.