Index lifecycle policy not deleting the index when it reaches the defined size

Hello

I have the Vector agent running on a Kubernetes cluster. It creates a data stream and indices.

I created an ILM policy with only a hot and a delete phase. It should keep the index in the hot phase until it reaches the defined size (100 MB), then roll over and delete the previous index right away.

Below is how the ILM policy gets attached to the index.

I created a component template with the ILM policy --> attached it to an index template with the data stream name pattern --> Vector creates the data stream --> the data stream creates the backing index and the ILM policy is attached (a sketch of these requests is shown below).
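For reference, a minimal sketch of the requests that set this chain up, using the names from this thread (bodies are abbreviated; the full definitions as returned by the cluster are shown further down, so this is not a copy of the exact requests used):

PUT _ilm/policy/vector_prod_ilm
{
  "policy": {
    "phases": {
      "hot": { "actions": { "rollover": { "max_size": "100mb" } } },
      "delete": { "min_age": "0d", "actions": { "delete": {} } }
    }
  }
}

PUT _component_template/vector_prod_datastream_component_template
{
  "template": {
    "settings": { "index.lifecycle.name": "vector_prod_ilm" }
  }
}

PUT _index_template/vector_prod_datastream_template
{
  "index_patterns": ["vector-kubernetes_logs-prod*"],
  "composed_of": ["vector_prod_datastream_component_template"],
  "priority": 200,
  "data_stream": {}
}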

ILM:

{
  "vector_prod_ilm": {
    "version": 16,
    "modified_date": "2023-03-24T15:23:34.142Z",
    "policy": {
      "phases": {
        "hot": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_size": "100mb"
            }
          }
        },
        "delete": {
          "min_age": "0d",
          "actions": {
            "delete": {
              "delete_searchable_snapshot": true
            }
          }
        }
      }
    },
    "in_use_by": {
      "indices": [
        ".ds-vector-kubernetes_logs-prod-2023.03.24-000001"
      ],
      "data_streams": [
        "vector-kubernetes_logs-prod"
      ],
      "composable_templates": [
        "vector_prod_datastream_template"
      ]
    }
  }
}

component_templates:

{
  "component_templates": [
    {
      "name": "vector_prod_datastream_component_template",
      "component_template": {
        "template": {
          "settings": {
            "index": {
              "lifecycle": {
                "name": "vector_prod_ilm"
              }
            }
          },
          "aliases": {
            "my_alias": {}
          }
        }
      }
    }
  ]
}

template:

{
  "index_templates": [
    {
      "name": "vector_prod_datastream_template",
      "index_template": {
        "index_patterns": [
          "vector-kubernetes_logs-prod*"
        ],
        "composed_of": [
          "vector_prod_datastream_component_template"
        ],
        "priority": 200,
        "data_stream": {
          "hidden": false,
          "allow_custom_routing": false
        }
      }
    }
  ]
}

data stream created by Vector:

{
  "data_streams": [
    {
      "name": "vector-kubernetes_logs-prod",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-vector-kubernetes_logs-prod-2023.03.24-000001",
          "index_uuid": "vK8qLxTkS9aNW81SAHY9aQ"
        }
      ],
      "generation": 1,
      "status": "GREEN",
      "template": "vector_prod_datastream_template",
      "ilm_policy": "vector_prod_ilm",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false
    }
  ]
}

index:

GET /.ds-vector-kubernetes_logs-prod-2023.03.24-000001/_ilm/explain?human

{
  "indices": {
    ".ds-vector-kubernetes_logs-prod-2023.03.24-000001": {
      "index": ".ds-vector-kubernetes_logs-prod-2023.03.24-000001",
      "managed": true,
      "policy": "vector_prod_ilm",
      "index_creation_date": "2023-03-24T15:23:56.036Z",
      "index_creation_date_millis": 1679671436036,
      "time_since_index_creation": "4.41m",
      "lifecycle_date": "2023-03-24T15:23:56.036Z",
      "lifecycle_date_millis": 1679671436036,
      "age": "4.41m",
      "phase": "hot",
      "phase_time": "2023-03-24T15:23:56.260Z",
      "phase_time_millis": 1679671436260,
      "action": "rollover",
      "action_time": "2023-03-24T15:23:56.460Z",
      "action_time_millis": 1679671436460,
      "step": "check-rollover-ready",
      "step_time": "2023-03-24T15:23:56.460Z",
      "step_time_millis": 1679671436460,
      "phase_execution": {
        "policy": "vector_prod_ilm",
        "phase_definition": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_size": "100mb"
            }
          }
        },
        "version": 16,
        "modified_date": "2023-03-24T15:23:34.142Z",
        "modified_date_in_millis": 1679671414142
      }
    }
  }
}

The index was not deleted.

What could be the issue? Can anyone take a look?

Thanks.

The index has not yet rolled over because it has not reached the configured size of 100 MB. Note that the max_size parameter only takes primary shard size into account, and the index you showed has a primary and a replica shard totaling 184.49 MB, which means the primary shard is a bit over 92 MB in size.
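If the intent is to trigger on the primary shard size directly, the rollover action also supports a max_primary_shard_size condition (available since Elasticsearch 7.13, so check your version). A sketch of the hot phase using it, assuming the same policy name as in this thread:

PUT _ilm/policy/vector_prod_ilm
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "100mb" }
        }
      },
      "delete": {
        "min_age": "0d",
        "actions": { "delete": {} }
      }
    }
  }
}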

Hey Christian, thanks for your reply. I changed the ILM policy and set the max size to 1 GB, and it's weird that it rolled over when the primary storage size reached 1.6 GB even though max_size in the ILM policy is set to 1 GB. The index generation number also jumped from 9 to 12, not sure why.

ILM:

{
  "vector_prod_ilm": {
    "version": 17,
    "modified_date": "2023-03-24T16:19:00.496Z",
    "policy": {
      "phases": {
        "hot": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_size": "1gb"
            }
          }
        },
        "delete": {
          "min_age": "0d",
          "actions": {
            "delete": {
              "delete_searchable_snapshot": true
            }
          }
        }
      }
    },
    "in_use_by": {
      "indices": [
        ".ds-vector-kubernetes_logs-prod-2023.03.24-000012",
        ".ds-vector-kubernetes_logs-prod-2023.03.24-000009"
      ],
      "data_streams": [
        "vector-kubernetes_logs-prod"
      ],
      "composable_templates": [
        "vector_prod_datastream_template"
      ]
    }
  }
}

Is it because it does document compression or something? I checked about 2 minutes later and the primary storage size had decreased from 1.6 GB (in the image above) to 1.26 GB.

The size of an index can fluctuate over time as merging takes place and new, merged segments are created before old ones are removed. I believe the size calculation averages out the size over time in order not to trigger consistently too early.
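One way to observe this fluctuation (a sketch, assuming the data stream name from this thread) is to watch the store sizes of the backing indices over time:

GET _cat/indices/.ds-vector-kubernetes_logs-prod-*?v&h=index,pri,rep,docs.count,pri.store.size,store.size&s=index

Here pri.store.size is the size of the primary shards only, which is what the max_size condition is compared against, while store.size also includes replicas.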

I understand. Any specific reason for the index generation number not following the order? Index 000009 should roll over to 000010 and so on.

That I do not know.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.