Indexes not rolled over or deleted

Hello All,

I've been stuck on an issue where indices are not being deleted according to what is defined in the ILM policy. I have the same issue on several ES clusters, so clearly I'm doing something wrong.

I have the following index template:

{
  "index_templates" : [
    {
      "name" : "logstash-template",
      "index_template" : {
        "index_patterns" : [
          "logstash-*"
        ],
        "template" : {
          "settings" : {
            "index" : {
              "lifecycle" : {
                "name" : "logstash-lifecycle-policy",
                "rollover_alias" : "logstash"
              },
              "number_of_shards" : "3",
              "number_of_replicas" : "0"
            }
          }
        },
        "composed_of" : [ ],
        "priority" : 1,
        "data_stream" : {
          "hidden" : false
        }
      }
    }
  ]
}
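
For reference, an output like the one above can be retrieved with the index template API; a minimal sketch, assuming the template name shown above:

GET _index_template/logstash-template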

This is the ILM Policy:

{
  "logstash-lifecycle-policy" : {
    "version" : 4,
    "modified_date" : "2022-06-02T12:40:54.148Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_primary_shard_size" : "25gb",
              "max_age" : "1d"
            }
          }
        },
        "delete" : {
          "min_age" : "14d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    },
    "in_use_by" : {
      "indices" : [
        ".ds-logstash-2022.06.07-2022.06.08-000002",
        ".ds-logstash-2022.06.07-2022.06.07-000001",
        ".ds-logstash-2022.06.04-2022.06.07-000004",
        ".ds-logstash-2022.06.04-2022.06.09-000006",
        ".ds-logstash-2022.05.31-2022.05.31-000001",
        ".ds-logstash-2022.05.27-2022.05.27-000001",
        ".ds-logstash-2022.06.07-2022.06.09-000003",
        ".ds-logstash-2022.06.04-2022.06.08-000005",
        ".ds-logstash-2022.05.29-2022.05.29-000001",
        ".ds-logstash-2022.06.03-2022.06.09-000007",
        ".ds-logstash-2022.06.03-2022.06.06-000004",
        ".ds-logstash-2022.06.06-2022.06.07-000002",
        ".ds-logstash-2022.06.03-2022.06.05-000003",
        ".ds-logstash-2022.06.06-2022.06.06-000001",
        ".ds-logstash-2022.05.30-2022.05.30-000001",
        ".ds-logstash-2022.06.06-2022.06.09-000004",
        ".ds-logstash-2022.06.09-2022.06.09-000001",
        ".ds-logstash-2022.06.03-2022.06.08-000006",
        ".ds-logstash-2022.06.06-2022.06.08-000003",
        ".ds-logstash-2022.05.26-2022.05.26-000001",
        ".ds-logstash-2022.06.03-2022.06.07-000005",
        ".ds-logstash-2022.05.20-2022.05.20-000001",
        ".ds-logstash-2022.06.03-2022.06.03-000001",
        ".ds-logstash-2022.05.23-2022.05.23-000001",
        ".ds-logstash-2022.05.19-2022.05.19-000001",
        ".ds-logstash-2022.06.03-2022.06.04-000002",
        ".ds-logstash-2022.05.28-2022.05.28-000001",
        ".ds-logstash-2022.06.05-2022.06.09-000005",
        ".ds-logstash-2022.06.05-2022.06.08-000004",
        ".ds-logstash-2022.06.08-2022.06.09-000002",
        ".ds-logstash-2022.06.05-2022.06.05-000001",
        ".ds-logstash-2022.05.22-2022.05.22-000001",
        ".ds-logstash-2022.05.25-2022.05.25-000001",
        ".ds-logstash-2022.06.08-2022.06.08-000001",
        ".ds-logstash-2022.06.05-2022.06.07-000003",
        ".ds-logstash-2022.06.05-2022.06.06-000002",
        ".ds-logstash-2022.05.18-2022.05.18-000001",
        ".ds-logstash-2022.06.02-2022.06.02-000001",
        ".ds-logstash-2022.06.04-2022.06.04-000001",
        ".ds-logstash-2022.05.21-2022.05.21-000001",
        ".ds-logstash-2022.06.04-2022.06.06-000003",
        ".ds-logstash-2022.05.24-2022.05.24-000001",
        ".ds-logstash-2022.06.04-2022.06.05-000002",
        ".ds-logstash-2022.05.17-2022.05.17-000001",
        ".ds-logstash-2022.06.01-2022.06.01-000001"
      ],
      "data_streams" : [
        "logstash-2022.05.18",
        "logstash-2022.06.09",
        "logstash-2022.05.19",
        "logstash-2022.06.08",
        "logstash-2022.06.05",
        "logstash-2022.06.04",
        "logstash-2022.06.07",
        "logstash-2022.05.17",
        "logstash-2022.06.06",
        "logstash-2022.06.01",
        "logstash-2022.06.03",
        "logstash-2022.06.02",
        "logstash-2022.05.30",
        "logstash-2022.05.31",
        "logstash-2022.05.29",
        "logstash-2022.05.25",
        "logstash-2022.05.26",
        "logstash-2022.05.27",
        "logstash-2022.05.28",
        "logstash-2022.05.21",
        "logstash-2022.05.22",
        "logstash-2022.05.23",
        "logstash-2022.05.24",
        "logstash-2022.05.20"
      ],
      "composable_templates" : [
        "logstash-template"
      ]
    }
  }
}
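
This output, including the "in_use_by" section that lists the backing indices and data streams managed by the policy, can be fetched with the ILM policy API; a sketch, assuming the policy name above:

GET _ilm/policy/logstash-lifecycle-policy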

ILM is running:

{
  "operation_mode" : "RUNNING"
}
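
That response is what the ILM status API returns:

GET _ilm/status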

This is the ILM status of one of the backing indices:

    ".ds-logstash-2022.05.20-2022.05.20-000001" : {
      "index" : ".ds-logstash-2022.05.20-2022.05.20-000001",
      "managed" : true,
      "policy" : "logstash-lifecycle-policy",
      "lifecycle_date_millis" : 1653004800018,
      "age" : "20.52d",
      "phase" : "hot",
      "phase_time_millis" : 1653004800202,
      "action" : "rollover",
      "action_time_millis" : 1653004800402,
      "step" : "check-rollover-ready",
      "step_time_millis" : 1653004800402,
      "phase_execution" : {
        "policy" : "logstash-lifecycle-policy",
        "phase_definition" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_primary_shard_size" : "25gb"
            },
            "set_priority" : {
              "priority" : 100
            }
          }
        },
        "version" : 1,
        "modified_date_in_millis" : 1645165181695
      }
    }
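
This excerpt comes from the ILM explain API; for the backing index shown here, the call would be:

GET .ds-logstash-2022.05.20-2022.05.20-000001/_ilm/explain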

I expected this index to be deleted already. Any tips on what I'm missing here?

From the status output, the index is still waiting to roll over, and the only rollover condition it knows about is "max_primary_shard_size" : "25gb", while the ILM policy also includes "max_age" : "1d". Was this second condition added after the index was created? (The index was created on 2022.05.20, but the ILM policy was last modified on 2022-06-02.)

If that is the case, the index has not yet reached 25 GB per primary shard, so it never rolls over and the clock for the delete phase never starts. You can call the rollover API against the data stream that owns this backing index, passing both conditions, on the assumption that the "1d" condition will trigger the rollover. You can then check the ILM status of the new write index to confirm it is picking up both conditions; see the sketch below.
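
A minimal sketch of those two steps, using the data stream and condition values shown earlier in this topic (adjust the names to whichever data stream you are targeting):

POST logstash-2022.05.20/_rollover
{
  "conditions": {
    "max_age": "1d",
    "max_primary_shard_size": "25gb"
  }
}

GET .ds-logstash-2022.05.20-*/_ilm/explain

The rollover should create a new write index for that data stream, and the explain output for the new backing index should then show both conditions from the current version of the policy in its "phase_definition".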

Repeat as needed for any other indices that are showing the same behavior.
