ILM rollover failing

OK, I'm trying to figure out my ILM situation again. I reset my policies, template, and aliases over the last week and I'm getting close; now it at least attempts the rollover, but fails. My index is (very creatively) named "logstash", and

GET /logstash/_ilm/explain

gives me

{
  "indices" : {
    "logstash" : {
      "index" : "logstash",
      "managed" : true,
      "policy" : "logstash-policy",
      "lifecycle_date_millis" : 1563385680162,
      "phase" : "hot",
      "phase_time_millis" : 1565012470542,
      "action" : "rollover",
      "action_time_millis" : 1564595597695,
      "step" : "ERROR",
      "step_time_millis" : 1565013006423,
      "failed_step" : "check-rollover-ready",
      "step_info" : {
        "type" : "illegal_argument_exception",
        "reason" : "index name [logstash] does not match pattern '^.*-\\d+$'",
        "stack_trace" : """
java.lang.IllegalArgumentException: index name [logstash] does not match pattern '^.*-\d+$'
	at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.generateRolloverIndexName(TransportRolloverAction.java:245)
	at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.masterOperation(TransportRolloverAction.java:128)
	at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.masterOperation(TransportRolloverAction.java:70)
	at org.elasticsearch.action.support.master.TransportMasterNodeAction.masterOperation(TransportMasterNodeAction.java:127)
	at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$1.doRun(TransportMasterNodeAction.java:200)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:193)
	at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:197)
	at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.start(TransportMasterNodeAction.java:161)
	at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:138)
	at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:58)
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:145)
	at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:123)
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:143)
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:121)
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:64)
	at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83)
	at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:392)
	at org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin(ClientHelper.java:89)
	at org.elasticsearch.xpack.core.ClientHelper.executeWithHeadersAsync(ClientHelper.java:152)
	at org.elasticsearch.xpack.indexlifecycle.LifecyclePolicySecurityClient.doExecute(LifecyclePolicySecurityClient.java:51)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:392)
	at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1212)
	at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.rolloverIndex(AbstractClient.java:1714)
	at org.elasticsearch.xpack.core.indexlifecycle.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:115)
	at org.elasticsearch.xpack.indexlifecycle.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:133)
	at org.elasticsearch.xpack.indexlifecycle.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:270)
	at org.elasticsearch.xpack.indexlifecycle.IndexLifecycleService.triggered(IndexLifecycleService.java:213)
	at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:168)
	at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:196)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:835)

"""
      },
      "phase_execution" : {
        "policy" : "logstash-policy",
        "phase_definition" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "1d"
            },
            "set_priority" : {
              "priority" : 100
            }
          }
        },
        "version" : 1,
        "modified_date_in_millis" : 1564582960143
      }
    }
  }
}
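The `reason` field in the error is at least easy to reproduce locally; a quick sketch (the extra index names here are just examples I made up, not anything in my cluster):

```python
import re

# The pattern ILM's rollover step enforces, per the error above:
# the index name must end in a dash followed by digits.
pattern = re.compile(r'^.*-\d+$')

for name in ["logstash", "logstash-000001", "logstash-2019.08.05-1"]:
    verdict = "matches" if pattern.match(name) else "does NOT match"
    print(f"{name}: {verdict}")
```

So the bare name "logstash" fails the check, while a numerically suffixed name would pass.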

It seems obvious my naming is running afoul of something, but I can't quite figure out where... assistance would be appreciated!
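If I'm reading the docs right, rollover wants the write index itself to end in a numeric suffix, with an alias pointing at it, so a bootstrap would look something like this (just a sketch; "logstash-000001" is my guess at a suffixed name, and it assumes the existing concrete "logstash" index is reindexed or deleted first so the alias name is free):

```
PUT /logstash-000001
{
  "aliases": {
    "logstash": {
      "is_write_index": true
    }
  }
}
```

Then writes would go through the "logstash" alias, and rollover would create logstash-000002 and so on.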

I should mention, as it may be relevant, that the index is presently north of 4 TB and 2.3 billion documents. It hasn't successfully rolled over since April, although indexing HAS halted and I've deleted the index after it filled up twice since then. Not sure whether the size will make this process harder now, but in case it's relevant I thought I'd share... :slight_smile: