Reindex existing data through ILM

Hello:

We are in the process of switching over to ILM (Index Lifecycle Management) on Elasticsearch 6.6.1. One of the reasons is that we want to control the size of our indices, and thus keep our shard sizes optimal.

I have it working as expected for new data, but now I want to reindex all my existing indices (of which there are many) through ILM, and I can't figure out how to do it. I tried reindexing one existing index to the rollover alias, but got an error:

Alias [alias name] has more than one indices associated with it [and then it lists all the indices ILM is currently managing], can't execute a single index op.

Anyone know how I would go about re-indexing my existing data through ILM?

Thanks

How did you set things up, e.g. are you using the default Filebeat policy and template?

The cause of the error you posted is that the alias indicated in index.lifecycle.rollover_alias should have exactly one index configured as the "write index" - that's the index that new data will be written to. Which index is the write index is then switched every time rollover happens. See the Index Alias docs for more information and an example of how to configure the write index for an alias.
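As a minimal sketch (the index and alias names here are just placeholders), designating the write index for an alias looks like this:

POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "my-index-000001",
        "alias": "my-write-alias",
        "is_write_index": true
      }
    }
  ]
}

Only one of the indices behind the alias should have is_write_index set to true at any given time; rollover moves that flag to the newly created index.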

However, could I ask why you're reindexing data to use ILM? ILM can be used on existing indices without any issue, with the caveat that if you already have some time-series indices (say, daily Filebeat indices), you should create two policies: one for new indices and one for existing indices.

The two can look very similar, but the policy used for existing indices should omit the rollover action. This does mean keeping track of multiple policies, but if your policies have a delete phase configured, that's only until the existing indices age out and are deleted - once all the pre-ILM indices are gone, you can remove the second policy.
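For illustration only (the policy name and thresholds below are made up), the policy for the pre-existing indices might carry nothing but the later phases, with no rollover action:

PUT _ilm/policy/existing-indices-policy
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "1d",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}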


Hi Gordon:

So, following the documentation, I created my alias like so:

PUT <desktop-{now{YYYY.MM.dd@HH:mm|-03:00}}-1>
{
  "aliases": {
    "desktop_alias": {
      "is_write_index": true
    }
  }
}

...where the desktop_alias is assigned to the initial index containing the date and time (plus the rollover number).

Here is the output of GET _alias for those indices:

{
  "desktop-2019.05.03@11:17-000031" : {
    "aliases" : {
      "desktop_alias" : {
        "is_write_index" : true
      }
    }
  },
  "desktop-2019.05.03@10:07-000030" : {
    "aliases" : {
      "desktop_alias" : {
        "is_write_index" : false
      }
    }
  }
}

So as you can see, the current (hot) index alias is writable, but the warm index isn't.

Did I still not do something right?

The reason we want to reindex the old data is that the sizes of the indexes have gotten out of control due to the addition of significantly more data over the past month. It used to be that I could keep shard sizes down using date math in logstash, but that's no longer possible due to its limitations.

So adding ILM will allow me to keep shard sizes optimal for any new data that's coming in, but I would like to bring them into line for the old data as well.

It's an incredible amount of data, and we've run into performance problems in the past due to shard sizes - Elastic has advised us not to let it get out of control again.

Hi Mark:

So it is set up as follows:

The data comes into Elasticsearch via Logstash.
The output index set in the pipeline points to the alias which was set up for the initial index during ILM setup. So the rollover alias is called desktop_alias, and the output of Logstash is set to send to desktop_alias (as though it were an index).
In this case, there are no other template options set for that index pattern.
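If it helps to see it concretely, the ILM binding in a template for that pattern would look roughly like this (the template and policy names below are placeholders, not the real ones):

PUT _template/desktop_template
{
  "index_patterns": ["desktop-*"],
  "settings": {
    "index.lifecycle.name": "desktop_policy",
    "index.lifecycle.rollover_alias": "desktop_alias"
  }
}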

This is how ILM is intended to work: you direct all writes to an alias, desktop_alias in your case. When the write index for that alias meets the conditions for rollover defined in your policy, a new index is created and all writes to desktop_alias are directed to the new index. ILM with rollover was generally designed to work best with new data.

The old index is still writable at this point, but only by writing to that index directly rather than through the alias (unless you've configured your policy to use the readonly, forcemerge, or shrink actions in the warm phase, which will cause it to be set to read-only via "index.blocks.write": true).
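To make that concrete, here's a rough sketch of a policy with rollover in the hot phase and a readonly action in the warm phase (the policy name and thresholds are placeholders, not a recommendation):

PUT _ilm/policy/example-rollover-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "1d"
          }
        }
      },
      "warm": {
        "min_age": "0ms",
        "actions": {
          "readonly": {}
        }
      }
    }
  }
}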

Let me see if I'm understanding correctly:
Your problem is that you have old data which has shard sizes that are too large (or too small, or generally the wrong size).

In order to correct this, you're trying to reindex the old data and use ILM so that it will be re-partitioned into new indices, which have the same shard sizes as new data.

If that's the case, there are a few things to be aware of, and some alternatives.

For one, if you're reindexing into an index/alias that's managed by ILM and relying on ILM's rollover to determine when to create a new index, be aware that ILM checks the rollover conditions periodically - by default, every 10 minutes. A lot of indexing can happen in 10 minutes if you're ingesting at a very high rate (as is typical during a reindex), so your index sizes may be larger than you expect. You can configure the interval at which these checks happen with the indices.lifecycle.poll_interval setting, which may help with that problem, but setting it too low can cause additional load on the master node. I would recommend keeping it above 1m and setting it back to the default once you're done, if you decide to keep going down this path.
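If you do adjust it, it's a dynamic cluster setting, for example (1m here is just an illustrative value):

PUT _cluster/settings
{
  "transient": {
    "indices.lifecycle.poll_interval": "1m"
  }
}

Setting it back to null afterwards restores the 10-minute default.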

For two, if you're using date math in your index names, as you appear to be, reindexing old data will result in indices whose names don't match the timestamps of the data they contain. This won't cause any technical problems, but it may be confusing. Similarly, if you're directing both old and new data into the same index, you'll have a mix of old and new data - which again won't cause performance issues or anything, but may be confusing and cause difficulty with data retention (i.e. if you want to delete all data older than, say, 90 days, but that data is mixed into the same index as data from 2 days ago, it's not as simple as just deleting the indices which contain the old data).

To correct shard sizes in the future, I recommend looking into the Shrink and Split APIs, although Split has some limitations for indices created in the 6.x series which may make it less useful for you right now. These APIs are much, much more efficient than reindexing.
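As a rough sketch of the Shrink approach (index names, the node name, and the target shard count are placeholders): the source index first needs to be read-only and have a copy of every shard on a single node, then you shrink it into a new index with fewer primary shards:

PUT /oversized-index/_settings
{
  "index.blocks.write": true,
  "index.routing.allocation.require._name": "shrink-node-name"
}

POST /oversized-index/_shrink/oversized-index-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}

Note that the target's primary shard count has to be a factor of the source's primary shard count.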

If the Shrink/Split APIs aren't able to do what you want, you may also be able to reindex with a script similar to the example here to accomplish your goal with different tradeoffs.
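The general shape of that approach is a _reindex whose script rewrites each document's destination index - for instance, routing documents into monthly indices based on their timestamp. This is only an illustrative sketch: the index names are made up, and it assumes @timestamp is stored as an ISO-8601 string:

POST _reindex
{
  "source": {
    "index": "desktop-old-*"
  },
  "dest": {
    "index": "desktop-rewritten"
  },
  "script": {
    "lang": "painless",
    "source": "ctx._index = 'desktop-rewritten-' + ctx._source['@timestamp'].substring(0, 7)"
  }
}

That gives you control over which documents land in which index, at the cost of a full reindex.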

Please correct me if I've misinterpreted you!

No, you haven't misinterpreted me at all. You're bang on. And I did consider the problem of old data being indexed to an index name that represents a current date. Because it's only a month's worth of data in this case, I figured "I'll live with it": if we absolutely need to nail down a particular index for search purposes, I can always find the index name by discovering the date of the data, and any new data will be identifiable by the proper index date. Alternatively, I could create a separate profile for the old data and just name the indexes "blahblah-00001" and so on, which I might do just to avoid confusion.

I will look at the Shrink and Split APIs... thanks for that advice. If they work out, I'll just use those.

Also, I didn't realize ILM only checks every 10 minutes by default. Thanks - I'll change that and bring it down a bit, or set the maximum size so that the shard size ends up closer to 48 GB rather than 50, so there's a bit of a buffer. Putting too much stress on the master makes me a bit nervous.

I still don't get why I can't reindex to the rollover alias using the reindex API, though. Logstash sends its data to it and it correctly indexes only to the index which has "is_write_index": true. So what's causing the reindex API to fail because it thinks it should be writing to all the indexes associated with the rollover alias? I mean, if that's just the way it is, I'll settle for it... it just seems weird to me and makes me wonder if there's a setting somewhere that I'm missing.
