I recently ran into a challenge migrating an entire cluster to a different version of Elasticsearch, specifically a migration that involves archive indices.
A few days ago, I posted a query seeking assistance on this matter and received valuable feedback. Unfortunately, due to licensing constraints (archive indices require an Enterprise license, and our customer is on Platinum), we are unable to perform a direct snapshot and restore from Elasticsearch 6.x to 8.12. Consequently, I have to execute the migration in two steps: first from version 6 to 7, and then from 7 to 8.
The production environment consists of approximately 1500 indexes, each ranging in size from 2 to 4GB, with data organized by day and month.
I am seeking advice on best practices for handling this scenario. How do seasoned professionals approach such operations? Is scripting commonly employed to, for instance, snapshot an entire month at a time and then restore it?
I am keen to hear your insights on managing this data migration with a considerable number of indexes. Your expertise and experiences will be highly valuable in guiding me through this process.
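Something like the following, run once per month, is roughly what I have in mind; the repository name `my_backup` and the `appname-*` naming are placeholders for illustration:

```
# on the source cluster: snapshot one month of daily indices
PUT _snapshot/my_backup/snap-2023.01?wait_for_completion=true
{
  "indices": "appname-2023.01.*",
  "include_global_state": false
}

# on the destination cluster: restore that month
POST _snapshot/my_backup/snap-2023.01/_restore
{
  "indices": "appname-2023.01.*",
  "include_global_state": false
}
```

The repository would have to be registered on both clusters, and a small script could iterate this over the months.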
You could also use Logstash to do the same thing, with the Logstash elasticsearch input and output plugins. We use Logstash a lot for these types of operations.
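For illustration, a minimal pipeline of that kind might look like the sketch below; the host URLs and the index pattern are placeholders to adapt, and credentials/TLS settings are omitted:

```
input {
  elasticsearch {
    hosts          => ["https://old-cluster:9200"]   # ES 6.x source (placeholder)
    index          => "appname-2023.01.*"
    docinfo        => true                           # expose each doc's _index and _id
    docinfo_target => "[@metadata][doc]"
  }
}

output {
  elasticsearch {
    hosts       => ["https://new-cluster:9200"]      # ES 8.x destination (placeholder)
    index       => "%{[@metadata][doc][_index]}"     # keep the original index name
    document_id => "%{[@metadata][doc][_id]}"        # keep the original document id
  }
}
```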
I've already run a few tests, but I'm having issues with network communication. These aren't related to Elastic itself; some ports need to be opened between the Elasticsearch 6 and 8 clusters.
I'm a bit concerned about network throughput and read/write speeds, since we have around 1500 indexes at 2 to 4 GB each...
In terms of operations, this (the reindex command) wouldn't have to be done one index at a time, right?
1500 indexes of this size would give you something close to 6 TB.
Do your indices have some naming pattern? You could reduce the number of indices in your destination cluster.
For example, reindex the daily indices from January into a single monthly index.
For example, if you have indices named like this:
appname-2023.01.01
appname-2023.01.02
...
appname-2023.01.31
You could set your destination index to just appname-2023.01; in the source of the reindex request you use appname-2023.01.*, and every daily index will be reindexed into the same destination index.
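As a sketch, the request for that January example would look something like this; `wait_for_completion=false` just runs it as a background task so the client call doesn't time out:

```
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "appname-2023.01.*"
  },
  "dest": {
    "index": "appname-2023.01"
  }
}
```

If the daily indices still live on the old cluster, you would add a `remote` block under `source` and allow the old host via the `reindex.remote.whitelist` setting on the destination cluster.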
One other thing that you may wish to look at in the future is ILM (index lifecycle management).
You seem to have lots of small indexes, and ILM rolls indices over at an efficient shard size. It also allows you to set retention periods in Kibana for index housekeeping.
We migrated all of our indexes to ILM-managed indices a few months back. We still have daily indexes, and as these share the same index pattern, both can be queried at the same time.
We are just letting the daily indexes drop off based on their retention.
I don't think this works the way you expect. In a reindex you can use a wildcard in the source index, meaning every index that matches the pattern will be reindexed into the same destination, but if you want to reindex and keep the same names, you will need to do it one index at a time (a sketch follows below).
It depends on how the indices are being queried and written.
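For the cross-cluster case, keeping the same names means one request per index, with the old cluster as a remote source. A sketch, where the host is a placeholder and must also be listed under `reindex.remote.whitelist` in the destination cluster's elasticsearch.yml:

```
POST _reindex?wait_for_completion=false
{
  "source": {
    "remote": {
      "host": "http://old-cluster:9200"
    },
    "index": "appname-2023.01.01"
  },
  "dest": {
    "index": "appname-2023.01.01"
  }
}
```

A small script looping over the index list from `GET _cat/indices` is the usual way to drive this.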
Also, having many small daily indices is not optimal; the recommendation is to keep shard sizes between roughly 40 and 50 GB, which is hard to achieve with daily indices. The best approach is to change the indexing strategy to an ILM-managed index that rolls over when it reaches a specified size or age.
But this will probably require changes on your side to how you index and read the data.
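For illustration, a policy along those lines might look like this; the policy name, sizes, and retention below are placeholders to adjust:

```
PUT _ilm/policy/appname-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

An index template then attaches the policy to the write alias or data stream, and the retention side can be managed from Kibana's Index Lifecycle Policies UI.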