ILM Policy Error

I am using ELK version 8.12.0 to collect Kong logs using Kong udp-logs.

I am getting the ILM error below for one old index. How can I solve it, please?

[2024-08-07T22:14:31,004][WARN ][o.e.x.i.ExecuteStepsUpdateTask] [elk03.example.com] policy [kong-index-policy] for index [kong-2022-11-17-000001] failed on cluster state step [{"phase":"hot","action":"rollover","name":"update-rollover-lifecycle-date"}]. Moving to ERROR step
java.lang.IllegalStateException: no rollover info found for [kong-2022-11-17-000001] with rollover target [kong], the index has not yet rolled over with that target
        at org.elasticsearch.xpack.core.ilm.UpdateRolloverLifecycleDateStep.performAction(UpdateRolloverLifecycleDateStep.java:60) ~[?:?]
        at org.elasticsearch.xpack.ilm.ExecuteStepsUpdateTask.doExecute(ExecuteStepsUpdateTask.java:113) ~[?:?]
        at org.elasticsearch.xpack.ilm.IndexLifecycleClusterStateUpdateTask.execute(IndexLifecycleClusterStateUpdateTask.java:47) ~[?:?]
        at org.elasticsearch.xpack.ilm.IndexLifecycleRunner$1.execute(IndexLifecycleRunner.java:70) ~[?:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1039) ~[elasticsearch-8.12.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1004) ~[elasticsearch-8.12.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:232) ~[elasticsearch-8.12.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1626) ~[elasticsearch-8.12.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:386) ~[elasticsearch-8.12.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1623) ~[elasticsearch-8.12.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1237) ~[elasticsearch-8.12.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:386) ~[elasticsearch-8.12.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1216) ~[elasticsearch-8.12.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983) ~[elasticsearch-8.12.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.12.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1583) ~[?:?]
[2024-08-07T22:24:30,254][INFO ][o.e.x.i.IndexLifecycleRunner] [elk03.example.com] policy [kong-index-policy] for index [kong-2022-11-17-000001] on an error step due to a transient error, moving back to the failed step [update-rollover-lifecycle-date] for execution. retry attempt [1900]
[2024-08-07T22:24:30,287][WARN ][o.e.x.i.ExecuteStepsUpdateTask] [elk03.example.com] policy [kong-index-policy] for index [kong-2022-11-17-000001] failed on cluster state step [{"phase":"hot","action":"rollover","name":"update-rollover-lifecycle-date"}]. Moving to ERROR step
java.lang.IllegalStateException: no rollover info found for [kong-2022-11-17-000001] with rollover target [kong], the index has not yet rolled over with that target

Hi @linux_admin,

Can you verify and share the ILM policy configuration?

GET _ilm/policy/kong-index-policy
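
It would also help to see the output of the ILM explain API for the affected index, for example:

GET kong-2022-11-17-000001/_ilm/explain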

Hi @Alex_Salgado-Elastic
Please find the kong-index-policy configuration below.

{
  "kong-index-policy": {
    "version": 8,
    "modified_date": "2024-07-25T14:31:51.684Z",
    "policy": {
      "phases": {
        "hot": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_age": "370d",
              "max_primary_shard_size": "10gb"
            },
            "set_priority": {
              "priority": 100
            }
          }
        },
        "delete": {
          "min_age": "730d",
          "actions": {
            "delete": {
              "delete_searchable_snapshot": true
            }
          }
        }
      }
    },
    "in_use_by": {
      "indices": [
        "kong-2022-11-17-000050",
        "kong-2022-11-17-000051",
        "kong-2022-11-17-000052",
        "kong-2022-11-17-000053",
        "kong-2022-11-17-000014",
        "kong-2022-11-17-000058",
        "kong-2022-11-17-000015",
        "kong-2022-11-17-000059",
        "kong-2022-11-17-000016",
        "kong-2022-11-17-000017",
        "kong-2022-11-17-000054",
        "kong-2022-11-17-000010",
        "kong-2022-11-17-000011",
        "kong-2022-11-17-000055",
        "kong-2022-11-17-000056",
        "kong-2022-11-17-000012",
        "kong-2022-11-17-000057",
        "kong-2022-11-17-000013",
        "kong-2022-11-17-000007",
        "kong-2022-11-17-000008",
        "kong-2022-11-17-000009",
        "kong-2022-11-17-000040",
        "kong-2022-11-17-000041",
        "kong-2022-11-17-000042",
        "kong-2022-11-17-000003",
        "kong-2022-11-17-000047",
        "kong-2022-11-17-000048",
        "kong-2022-11-17-000004",
        "kong-2022-11-17-000049",
        "kong-2022-11-17-000005",
        "kong-2022-11-17-000006",
        "kong-2022-11-17-000043",
        "kong-2022-11-17-000044",
        "kong-2022-11-17-000001",
        "kong-2022-11-17-000045",
        "kong-2022-11-17-000002",
        "kong-2022-11-17-000046",
        "kong-2022-11-17-000030",
        "kong-2022-11-17-000031",
        "kong-2022-11-17-000036",
        "kong-2022-11-17-000037",
        "kong-2022-11-17-000038",
        "kong-2022-11-17-000039",
        "kong-2022-11-17-000032",
        "kong-2022-11-17-000033",
        "kong-2022-11-17-000034",
        "kong-2022-11-17-000035",
        "kong-2022-11-17-000029",
        "kong-2022-11-17-000061",
        "kong-2022-11-17-000062",
        "kong-2022-11-17-000063",
        "kong-2022-11-17-000020",
        "kong-2022-11-17-000064",
        "kong-2022-11-17-000060",
        "kong-2022-11-17-000025",
        "kong-2022-11-17-000026",
        "kong-2022-11-17-000027",
        "kong-2022-11-17-000028",
        "kong-2022-11-17-000065",
        "kong-2022-11-17-000021",
        "kong-2022-11-17-000022",
        "kong-2022-11-17-000066",
        "kong-2022-11-17-000067",
        "kong-2022-11-17-000023",
        "kong-2022-11-17-000068",
        "kong-2022-11-17-000024",
        "kong-2022-11-17-000018",
        "kong-2022-11-17-000019"
      ],
      "data_streams": [],
      "composable_templates": [
        "kong-index-template",
        "kong"
      ]
    }
  }
}

And what's the alias config?

GET /kong-2022-11-17-000001/_alias

GET /kong-2022-11-17-000001/_alias
{
  "kong-2022-11-17-000001": {
    "aliases": {
      "kong": {}
    }
  }
}

It might be that the alias kong is not configured as the write index. To fix this, you need to set the alias kong as the write index (is_write_index).
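
To confirm, you can check whether any index currently has is_write_index set for the kong alias, for example with the _cat aliases API:

GET _cat/aliases/kong?v&h=alias,index,is_write_index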

If it's not a production environment, or data loss isn't a concern, it might be better to delete the current alias to avoid conflicts and recreate it with the write option, like this:

POST /_aliases
{
  "actions": [
    {
      "remove": {
        "index": "kong-2022-11-17-000001",
        "alias": "kong"
      }
    }
  ]
}

And recreate:

POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "kong-2022-11-17-000001",
        "alias": "kong",
        "is_write_index": true
      }
    }
  ]
}
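
Both actions can also be sent in a single _aliases request so the change is applied atomically, something like:

POST /_aliases
{
  "actions": [
    {
      "remove": {
        "index": "kong-2022-11-17-000001",
        "alias": "kong"
      }
    },
    {
      "add": {
        "index": "kong-2022-11-17-000001",
        "alias": "kong",
        "is_write_index": true
      }
    }
  ]
}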

What do you think?

Hi @Alex_Salgado-Elastic, thanks for your support.

This is a production environment. The current write index is kong-2022-11-17-000069.

I need your advice, please: is there any impact if I proceed with the above commands? Will the above command make kong-2022-11-17-000001 writable instead of kong-2022-11-17-000069?

I remember that kong-2022-11-17-000001 was created a long time ago from another index using a reindexing process.

Hello @linux_admin,

Given that this is a production environment, it is crucial to proceed with caution.

Here on the forum, we do our best to help users with their issues voluntarily, but this is by no means official customer support. If you need production support or official support, it can be contracted according to your license.

All the commands I suggested are meant to help you test and assess the current situation, which may involve aspects beyond what is shown here. In this case, yes: the command will make kong-2022-11-17-000001 the write index, redirecting all new write operations to it instead of kong-2022-11-17-000069. That does not mean you should execute it exactly in this manner, since you control your infrastructure and know the potential side effects.
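
Just to illustrate that last point with a sketch (adapt it to your own case): if kong-2022-11-17-000069 is the index that should keep receiving writes, the add action would point at that index instead:

POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "kong-2022-11-17-000069",
        "alias": "kong",
        "is_write_index": true
      }
    }
  ]
}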

What I can suggest, and this applies to all cases, is testing in a staging environment first: if possible, replicate the changes there to observe any potential impacts and ensure everything works as expected. This way, you have a safe environment in which to test solutions and can apply them according to your company's processes.

So, the idea here is that, with the help of the forum, you understand the problem as a whole, test in a safe environment, and create your deployment strategy in production under your responsibility.

Here you can find more detailed information on this topic: Index Lifecycle Management

I hope this helps!

Thank you so much @Alex_Salgado-Elastic for your support. It is highly appreciated!

I reproduced the issue in the test environment, and as you expected, new write operations go to kong-2022-11-17-000001 instead of kong-2022-11-17-000069.
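
Roughly how I checked it: I indexed a test document through the kong alias and looked at the _index field in the response to see which backing index received it:

POST kong/_doc
{
  "test": true
}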

I apologize if I am asking a simple question; I am new to the ELK stack.

The issue still exists. I would appreciate it if someone could help me solve it, please.