ILM doesn't delete indices

Hi All,

I have a problem with an ILM policy that doesn't delete my old indices. I feel like I've checked everything I could (and everything I know), but I can't figure out why they are not being deleted.
I would appreciate your help; please look at the output below:

GET /_ilm/policy/someindex-logs-policy
{
  "someindex-logs-policy": {
    "version": 2,
    "modified_date": "2023-01-25T23:54:02.844Z",
    "policy": {
      "phases": {
        "hot": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_primary_shard_size": "50gb",
              "max_age": "7d"
            },
            "set_priority": {
              "priority": 100
            }
          }
        },
        "delete": {
          "min_age": "7d",
          "actions": {
            "delete": {
              "delete_searchable_snapshot": true
            }
          }
        }
      }
    },
    "in_use_by": {
      "indices": [
        ".ds-someindex-logs-2023.01.22-2023.01.22-000001",
        ".ds-someindex-logs-2023.01.06-2023.01.06-000001",
        ".ds-someindex-logs-2023.01.25-2023.01.25-000001",
        ".ds-someindex-logs-2023.01.10-2023.01.10-000001",
        ".ds-someindex-logs-2023.01.15-2023.01.15-000001",
        ".ds-someindex-logs-2023.01.18-2023.01.18-000001",
        ".ds-someindex-logs-2023.01.16-2023.01.16-000001",
        ".ds-someindex-logs-2023.01.20-2023.01.20-000001",
        ".ds-someindex-logs-2023.01.19-2023.01.19-000001",
        ".ds-someindex-logs-2023.01.09-2023.01.09-000001",
        ".ds-someindex-logs-2023.01.13-2023.01.13-000001",
        ".ds-someindex-logs-2023.01.14-2023.01.14-000001",
        ".ds-someindex-logs-2023.01.23-2023.01.23-000001",
        ".ds-someindex-logs-2023.01.07-2023.01.07-000001",
        ".ds-someindex-logs-2023.01.11-2023.01.11-000001",
        ".ds-someindex-logs-2023.01.08-2023.01.08-000001",
        ".ds-someindex-logs-2023.01.12-2023.01.12-000001",
        ".ds-someindex-logs-2023.01.17-2023.01.17-000001",
        ".ds-someindex-logs-2023.01.21-2023.01.21-000001",
        ".ds-someindex-logs-2023.01.05-2023.01.05-000001",
        ".ds-someindex-logs-2023.01.24-2023.01.24-000001"
      ],
      "data_streams": [
        "someindex-logs-2023.01.10",
        "someindex-logs-2023.01.11",
        "someindex-logs-2023.01.12",
        "someindex-logs-2023.01.13",
        "someindex-logs-2023.01.14",
        "someindex-logs-2023.01.15",
        "someindex-logs-2023.01.16",
        "someindex-logs-2023.01.17",
        "someindex-logs-2023.01.18",
        "someindex-logs-2023.01.19",
        "someindex-logs-2023.01.20",
        "someindex-logs-2023.01.21",
        "someindex-logs-2023.01.22",
        "someindex-logs-2023.01.23",
        "someindex-logs-2023.01.24",
        "someindex-logs-2023.01.02",
        "someindex-logs-2023.01.25",
        "someindex-logs-2023.01.03",
        "someindex-logs-2023.01.05",
        "someindex-logs-2023.01.06",
        "someindex-logs-2023.01.07",
        "someindex-logs-2023.01.08",
        "someindex-logs-2023.01.09",
        "someindex-logs-2022.12.28"
      ],
      "composable_templates": [
        "someindex-logs-template"
      ]
    }
  }
}
GET /.ds-someindex-logs-2023.01.15-2023.01.15-000001/_ilm/explain?human
{
  "indices": {
    ".ds-someindex-logs-2023.01.15-2023.01.15-000001": {
      "index": ".ds-someindex-logs-2023.01.15-2023.01.15-000001",
      "managed": true,
      "policy": "someindex-logs-policy",
      "index_creation_date": "2023-01-15T03:00:36.744Z",
      "index_creation_date_millis": 1673751636744,
      "time_since_index_creation": "10.91d"
    }
  }
}
GET /.ds-someindex-logs-2023.01.06-2023.01.06-000001/_ilm/explain?human
{
  "indices": {
    ".ds-someindex-logs-2023.01.06-2023.01.06-000001": {
      "index": ".ds-someindex-logs-2023.01.06-2023.01.06-000001",
      "managed": true,
      "policy": "someindex-logs-policy",
      "index_creation_date": "2023-01-06T03:00:05.668Z",
      "index_creation_date_millis": 1672974005668,
      "time_since_index_creation": "19.92d"
    }
  }
}

Hello @obol89, it seems like the most probable cause is this:

Essentially, you are rolling over the indices when they reach an age of 7 days and trying to delete indices that have reached 7 days of age. This implies that on the 8th day the index is rolled over, and both the new index and the rolled-over index effectively start again from an age of 0d, because the delete phase's min_age is counted from the rollover, not from index creation.
Have you checked whether you also have old rolled-over indices in your cluster, or is this happening only for new indices?
Please note that when an index is rolled over, its name stays the same (unless you have specified daily or weekly indices in the naming convention), hence time_since_index_creation keeps increasing; writes essentially go to a new index, and the rolled-over one is the older index.
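To see where each backing index actually sits in its lifecycle, you could run the explain API across all backing indices; this is just a sketch, and the exact fields returned depend on your Elasticsearch version:

GET /.ds-someindex-logs-*/_ilm/explain?human

In the response, the phase, action and step fields show how far each index has progressed, and the age field is what the delete phase's min_age is compared against; for an index that has been rolled over, it is measured from the rollover time, not from index_creation_date.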

It looks like you are creating one data stream per day instead of letting a single data stream roll over (a sketch of that setup is at the end of this reply). It also looks like you have a single policy associated with all of these data streams. This is not how it is supposed to work, so I am not sure what the expected behaviour is in this situation.

Why are you using it this way? Is it to make sure that each index covers exactly one day?

How are you indexing into Elasticsearch?
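For comparison, the usual setup is a single data stream whose backing indices roll over under one ILM policy. A minimal sketch of such an index template, reusing your policy and template names (the index pattern and settings here are assumptions, not taken from your cluster):

PUT /_index_template/someindex-logs-template
{
  "index_patterns": ["someindex-logs*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.lifecycle.name": "someindex-logs-policy"
    }
  }
}

With a template like this, everything is written to the single data stream someindex-logs; ILM rolls its backing indices over at 7 days or 50 GB and deletes each one 7 days after it has been rolled over.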

Yes, I think you are correct. I had this configuration in my filebeat.yml:

  - index: "some-logs-%{+yyyy.MM.dd}"
    when.contains:
      log.file.path: "some"

I've removed the -%{+yyyy.MM.dd} part and I'm waiting to see what happens.
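For reference, the relevant part of filebeat.yml would presumably now look like this (the index name and condition are just the redacted values from above carried over, not verified config):

  - index: "some-logs"
    when.contains:
      log.file.path: "some"

Without the date suffix, every matching event goes to the same data stream name, so a single data stream can roll over under the policy instead of a new one being created each day.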

To answer your question: creating one index per day was not the intended behaviour; it was more a misunderstanding of the documentation.
