Data streams stuck in frozen searchable_snapshot phase (wait state inconsistent with indices status)

This request follows up on a previous ticket that was never really answered.

ES 8.8.2 (free trial locally; paid Enterprise licence on the other environment).

Same problem: I have created a data stream with an ILM policy whose phases go from Hot through Frozen to Delete.

The backing indices successfully move into the Frozen phase but never continue on to the Delete phase.

Why are the indices not moving to the delete phase?

Add some cluster settings and a lifecycle policy with an AWS S3 searchable snapshot repository.

PUT _cluster/settings
{
  "transient": {
    "indices.lifecycle.poll_interval": "30s"
  }
}
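
For reference, the policy below points at an S3 repository that was registered beforehand. A minimal sketch of that step, assuming a placeholder bucket name and that the S3 client credentials are already in the Elasticsearch keystore:

# register the S3 repository used by the frozen phase (bucket name is a placeholder)
PUT _snapshot/snapshot_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-ilm-test-bucket"
  }
}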

PUT _ilm/policy/test-policy
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "set_priority": {
            "priority": 100
          },
          "rollover": {
            "max_primary_shard_size": "50mb",
            "max_age": "10s"
          }
        }
      },
      "frozen": {
        "min_age": "5m",
        "actions": {
          "searchable_snapshot": {
            "snapshot_repository": "snapshot_s3_repository"
          }
        }
      },
      "delete": {
        "min_age": "15m",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
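
The stored policy can be read back to confirm it matches what was sent (the explain output further down also shows a force_merge_index flag that ILM fills in by default):

GET _ilm/policy/test-policy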

Create an index template with the lifecycle policy and data stream enabled, then add some data.

PUT _index_template/test-template
{
  "index_patterns": ["test-index*"],
  "data_stream": { },
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": {
          "type": "date",
          "format": "date_optional_time||epoch_millis"
        }
      }
    },
    "settings": {
      "lifecycle": {
        "name": "test-policy"
      },
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  } 
}
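
Indexing the first document below auto-creates the data stream, but it can also be created explicitly up front to verify that the template matches:

PUT _data_stream/test-index-1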

POST test-index-1/_doc
{
  "field1": "someValue2",
  "@timestamp": 1689718060023
}
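
To follow the rollover, the data stream's backing indices and attached policy can be listed:

GET _data_stream/test-index-1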

You can find the elasticsearch.log here, after one hour.

ILM Explain

GET test-index-1/_ilm/explain
{
  "indices": {
    ".ds-test-index-1-2023.07.18-000001": {
      "index": ".ds-test-index-1-2023.07.18-000001",
      "managed": true,
      "policy": "test-policy",
      "index_creation_date_millis": 1689718108382,
      "time_since_index_creation": "33.78m",
      "lifecycle_date_millis": 1689718131813,
      "age": "33.39m",
      "phase": "frozen",
      "phase_time_millis": 1689718461739,
      "action": "searchable_snapshot",
      "action_time_millis": 1689718461739,
      "step": "wait-for-index-color",
      "step_time_millis": 1689718524472,
      "repository_name": "snapshot_s3_repository",
      "snapshot_name": "2023.07.18-.ds-test-index-1-2023.07.18-000001-test-policy-dgun9fltszwfctmnc1zj-a",
      "step_info": {
        "message": "index is not green; not all shards are active"
      },
      "phase_execution": {
        "policy": "test-policy",
        "phase_definition": {
          "min_age": "5m",
          "actions": {
            "searchable_snapshot": {
              "snapshot_repository": "snapshot_s3_repository",
              "force_merge_index": true
            }
          }
        },
        "version": 1,
        "modified_date_in_millis": 1689717832957
      }
    },
    ".ds-test-index-1-2023.07.18-000002": {
      "index": ".ds-test-index-1-2023.07.18-000002",
      "managed": true,
      "policy": "test-policy",
      "index_creation_date_millis": 1689718131896,
      "time_since_index_creation": "33.39m",
      "lifecycle_date_millis": 1689718131896,
      "age": "33.39m",
      "phase": "hot",
      "phase_time_millis": 1689718132431,
      "action": "rollover",
      "action_time_millis": 1689718133239,
      "step": "check-rollover-ready",
      "step_time_millis": 1689718133239,
      "phase_execution": {
        "policy": "test-policy",
        "phase_definition": {
          "min_age": "0ms",
          "actions": {
            "set_priority": {
              "priority": 100
            },
            "rollover": {
              "max_age": "10s",
              "max_primary_shard_size": "50mb"
            }
          }
        },
        "version": 1,
        "modified_date_in_millis": 1689717832957
      }
    }
  }
}
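
Given the snapshot name reported above, the snapshot itself can also be checked on the repository side to rule out an incomplete snapshot (repository and snapshot names copied from the explain output):

GET _snapshot/snapshot_s3_repository/2023.07.18-.ds-test-index-1-2023.07.18-000001-test-policy-dgun9fltszwfctmnc1zj-a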

My index is still green:

GET /_cat/indices/.ds-test-index-1-2023.07.18-000001?v
health status index                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .ds-test-index-1-2023.07.18-000001 2ff-u2EuQGmSQOyLB9qKKQ   1   0          1            0      4.3kb          4.3kb
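
One thing I am not sure about: if I read the docs correctly, the frozen-phase searchable_snapshot action mounts the snapshot as a separate, partially mounted index prefixed with partial-, and the wait-for-index-color step waits on that mounted index rather than on the original backing index. So something like this should show whether the mounted index has unassigned shards (the partial- index name is my guess):

# list shards of the partially mounted index (name pattern inferred)
GET _cat/shards/partial-.ds-test-index-1-*?v

# explain why the mounted index's primary shard is or is not assigned
GET _cluster/allocation/explain
{
  "index": "partial-.ds-test-index-1-2023.07.18-000001",
  "shard": 0,
  "primary": true
}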

Even so, I don't really see why I get "index is not green; not all shards are active" reported against an index that shows green, or why the move to the delete phase never happens. Maybe I have to open an issue on the GitHub side?
But I have a feeling it's not the first report; there seem to be similar issues here:
