Applying ILM to existing time-series indices

Hello! I feel as though I've been on a wild goose chase today and would appreciate any guidance. As context, when I originally set up ILM in my index templates, I missed the step where you have to specify a rollover_alias. I've since added it, which fixed the first ILM error we were getting about the rollover_alias not being defined (duh).

But not all is well. I've followed all the steps here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started-index-lifecycle-management.html#ilm-gs-check-progress and I'm still getting this error on our existing indices: "illegal_state_exception: no rollover info found for [logstash-imu-logs-v1-2020.06.19] with alias [logstash-imu-logs], the index has not yet rolled over with that alias". I'm trying to figure out whether the solution is reindexing, or whether there's some way to get our existing logs onto the ILM train. I've retried the failed lifecycle step to no avail.
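(For anyone who lands here with the same error: the retry I mention is just the ILM retry endpoint against the stuck index; the index name below is ours.)

POST logstash-imu-logs-v1-2020.06.19/_ilm/retry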

Here's the output of
GET /logstash-imu-logs-v1-2020.06.19/_ilm/explain?human

{
  "indices" : {
    "logstash-imu-logs-v1-2020.06.19" : {
      "index" : "logstash-imu-logs-v1-2020.06.19",
      "managed" : true,
      "policy" : "hot-warm",
      "lifecycle_date" : "2020-07-01T18:56:05.512Z",
      "lifecycle_date_millis" : 1593629765512,
      "age" : "5.24d",
      "phase" : "hot",
      "phase_time" : "2020-07-07T00:44:22.891Z",
      "phase_time_millis" : 1594082662891,
      "action" : "rollover",
      "action_time" : "2020-07-01T19:04:35.557Z",
      "action_time_millis" : 1593630275557,
      "step" : "ERROR",
      "step_time" : "2020-07-07T00:44:25.185Z",
      "step_time_millis" : 1594082665185,
      "failed_step" : "update-rollover-lifecycle-date",
      "is_auto_retryable_error" : true,
      "failed_step_retry_count" : 459,
      "step_info" : {
        "type" : "illegal_state_exception",
        "reason" : "no rollover info found for [logstash-imu-logs-v1-2020.06.19] with alias [logstash-imu-logs], the index has not yet rolled over with that alias",
        "stack_trace" : """java.lang.IllegalStateException: no rollover info found for [logstash-imu-logs-v1-2020.06.19] with alias [logstash-imu-logs], the index has not yet rolled over with that alias
	at org.elasticsearch.xpack.core.ilm.UpdateRolloverLifecycleDateStep.performAction(UpdateRolloverLifecycleDateStep.java:63)
	at org.elasticsearch.xpack.ilm.ExecuteStepsUpdateTask.execute(ExecuteStepsUpdateTask.java:97)
	at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702)
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324)
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219)
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73)
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151)
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
	at java.base/java.lang.Thread.run(Thread.java:832)
"""
      },
      "phase_execution" : {
        "policy" : "hot-warm",
        "phase_definition" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "80gb",
              "max_age" : "7d"
            },
            "set_priority" : {
              "priority" : 100
            }
          }
        },
        "version" : 4,
        "modified_date" : "2020-06-09T16:08:30.780Z",
        "modified_date_in_millis" : 1591718910780
      }
    }
  }
}

This is the ILM policy we are using:

{
  "hot-warm" : {
    "version" : 4,
    "modified_date" : "2020-06-09T16:08:30.780Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "80gb",
              "max_age" : "7d"
            },
            "set_priority" : {
              "priority" : 100
            }
          }
        },
        "warm" : {
          "min_age" : "0ms",
          "actions" : {
            "allocate" : {
              "include" : { },
              "exclude" : { },
              "require" : {
                "data" : "warm"
              }
            },
            "set_priority" : {
              "priority" : 50
            }
          }
        }
      }
    }
  }
}

And finally, the relevant bits of the index template:

{
  "logstash-imu-logs-reindexed" : {
    "order" : 0,
    "index_patterns" : [
      "logstash-imu-logs-*",
      "logstash-imu-logs-reindexed-*"
    ],
    "settings" : {
      "index" : {
        "lifecycle" : {
          "name" : "hot-warm",
          "rollover_alias" : "logstash-imu-logs"
        },
        "routing" : {
          "allocation" : {
            "require" : {
              "data" : "hot"
            }
          }
        },
        "mapping" : {
          "total_fields" : {
            "limit" : "200"
          }
        },
        "refresh_interval" : "-1",
        "number_of_shards" : "1",
        "max_result_window" : "100000",
        "number_of_replicas" : "0"
      }
    },
    "mappings" : {
      "_meta" : { },
      "_source" : {
        "enabled" : true
      },
      "properties" : {
        ...
      }
    },
    "aliases" : { }
  }
}

I figured this out. Since we are using Elastic Cloud, it's totally sufficient to use the Kibana UI to set up ILM, which we did. Where I went wrong was using the 'rollover' action: since we are creating daily indices via Logstash, it's simple enough to define a hot-warm policy with no rollover action at all.
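For anyone who prefers to do the same thing via the API rather than the Kibana UI, the policy ends up looking roughly like this. This is only a sketch: the policy name "hot-warm-daily" and the exact actions are placeholders, not our production policy.

PUT _ilm/policy/hot-warm-daily
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "set_priority": { "priority": 100 }
        }
      },
      "warm": {
        "min_age": "0d",
        "actions": {
          "allocate": {
            "require": { "data": "warm" }
          },
          "set_priority": { "priority": 50 }
        }
      }
    }
  }
}

With no rollover action, the lifecycle date is the index creation date, so daily indices move through the phases based purely on their age.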

This is the description in the docs that saved me:
"If you are using daily indices (created by Logstash or another client) and you want to use the index lifecycle policy to manage aging data, you can disable the rollover action in the hot phase. You can then transition to the warm, cold, and delete phases based on the time of index creation." (https://www.elastic.co/guide/en/kibana/7.6/creating-index-lifecycle-policies.html#setting-a-rollover-action).

As for immediately moving indices from hot -> warm: create a policy without a rollover action in the hot phase, and set the minimum age for the warm phase to 0 days/hours after index creation. That way you can apply the policy to the existing indices you want to move to warm (docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-with-existing-indices.html#ilm-existing-indices-apply).
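Concretely, attaching such a policy to existing daily indices is just an index settings update along these lines (again a sketch, using the placeholder policy name from above):

PUT logstash-imu-logs-v1-*/_settings
{
  "index": {
    "lifecycle": {
      "name": "hot-warm-daily"
    }
  }
}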

If anyone from Elastic is reading this: it feels like the information about using Logstash with ILM is spread very thinly across your docs. Would love it if you could flesh out the Logstash section more, especially for those of us using Elastic Cloud.
