ILM trying to rollover old index

Hello, I have a single-node ES setup that's receiving data from Filebeat running on a separate server.

Since I have Filebeat shipping data from different log sources, I went through the process of manually creating an index template and ILM policy (and then bootstrapping the index) in order to send the data from each source to a separate index.
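For context, the bootstrapping step for one of the sources looked roughly like this. This is a sketch, not my exact commands; the index and alias names follow my lb-web-dataapi-* pattern, and the matching index template and ILM policy are assumed to exist already:

```shell
# Create the initial ("bootstrap") index for one log source and mark it as
# the write index for its rollover alias, so ILM knows where to roll over from.
curl -X PUT "localhost:9200/lb-web-dataapi-7.9.2-000001" \
  -H 'Content-Type: application/json' -d'
{
  "aliases": {
    "lb-web-dataapi-7.9.2": { "is_write_index": true }
  }
}'
```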

Over time, after updating Filebeat on the client and Elasticsearch on the server, I've begun seeing a long series of error messages in the Elasticsearch index management view and in the Elasticsearch log itself. This is an example from the log output:

[2020-11-24T11:51:01,353][ERROR][o.e.x.i.IndexLifecycleRunner] [monitor-elk] policy [lb-web-dataapi] for index [lb-web-dataapi-7.9.0-000002] failed on step [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]. Moving to ERROR step
java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [lb-web-dataapi-7.9.2] does not point to index [lb-web-dataapi-7.9.0-000002]
        at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition( [x-pack-core-7.9.3.jar:7.9.3]
        at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep( [x-pack-ilm-7.9.3.jar:7.9.3]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies( [x-pack-ilm-7.9.3.jar:7.9.3]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered( [x-pack-ilm-7.9.3.jar:7.9.3]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners( [x-pack-core-7.9.3.jar:7.9.3]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ [x-pack-core-7.9.3.jar:7.9.3]
        at java.util.concurrent.Executors$ [?:?]
        at [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:?]
        at java.util.concurrent.ThreadPoolExecutor$ [?:?]
        at [?:?]

Clearly, there is a mismatch between the rollover alias and the index name, which is apparently causing the rollover action to fail. However, the weird part is that I don't know where Elasticsearch is getting the old index name "lb-web-dataapi-7.9.0-000002" from, as I've updated everything to 7.9.3 on the ES server and 7.9.2 on the Filebeat client. In fact, at the same time as these errors are occurring, Elasticsearch is continuing to roll over the newer versions of the index just fine (or so it appears, at least).

Does anyone know how I can find out why ILM is stuck on this old index?
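In case it helps others hitting the same error: two quick checks are the ILM explain API, which reports the step an index is stuck on, and the aliases listing, which shows which index an alias actually points to. The names below are the ones from my log output; substitute your own:

```shell
# Show the current ILM step (and any error details) for the problem index.
curl -X GET "localhost:9200/lb-web-dataapi-7.9.0-000002/_ilm/explain?pretty"

# List all matching aliases, the index each points to, and whether it is
# the write index for rollover.
curl -X GET "localhost:9200/_cat/aliases/lb-web-dataapi-*?v"
```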

That might depend on what your Filebeat config looks like; can you share it?

Thanks for the response. I think I figured out the issue actually...

Basically, when I updated to a newer version of Filebeat, I went through the manual process of updating the index template and bootstrapping a new index again so that the index name and alias would incorporate the newer version number. That worked, and the newer index versions received data and rolled over smoothly, but I forgot about the older index I had bootstrapped previously (the 7.9.0 index), which was still tied to its ILM policy and the (now modified) index template. So I'm guessing the rollover error reported in the Elasticsearch log arose from the mismatch between the old index and the new template version, plus the fact that I never removed the ILM policy to stop it from trying to roll over.

All in all, I believe I resolved the issue by removing the ILM policy from the old index that was trying to roll over and producing errors.
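For reference, detaching the policy from the stuck index is a single call (the index name here is the one from the errors above):

```shell
# Remove the ILM policy from the old bootstrap index so ILM stops
# trying to roll it over.
curl -X POST "localhost:9200/lb-web-dataapi-7.9.0-000002/_ilm/remove?pretty"
```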

Just to clarify for you and other users, though: the faulting index was completely empty (it contained no documents), because I had configured Filebeat to ship data only to the new index version after updating. Only the predecessors of the old index contained data. So it was safe for me to remove the ILM policy from an empty index without worrying about cutting short the retention of the old data.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.