Unassigned shards with new frozen tier

I have spun up a brand-new cluster on Elastic Cloud to test 7.13, and I have data_hot and data_frozen tiers.

I've changed the ILM policy for logs and metrics so that data older than a few hours (just to test) is moved to the frozen tier. However (and this is the third cluster I have tried this on), after the indices move to the frozen tier, the cluster goes into a red state with unassigned shards.

What's your policy?
Can you run an explain on it?
What about the output from GET _cluster/allocation/explain?
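You can point the allocation explain API at the problem shard directly. A minimal sketch, using the index name from the output below (swap in whichever index is unassigned in your cluster):

GET _cluster/allocation/explain
{
  "index": ".ds-metrics-system.process-default-2021.05.27-000004",
  "shard": 0,
  "primary": false
}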

Hmmm, it doesn't look like the data_hot node is actually assigned that role?
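One quick way to check which roles each node actually carries is the cat nodes API (these are standard _cat columns):

GET _cat/nodes?v&h=name,node.role,disk.used_percent

If I remember the 7.13 role abbreviations correctly, a dedicated frozen node should show only f under node.role, while a hot node's role string includes h.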

{
  "can_allocate": "no",
  "index": ".ds-metrics-system.process-default-2021.05.27-000004",
  "node_allocation_decisions": [
    {
      "node_decision": "no",
      "transport_address": "10.46.88.21:19990",
      "weight_ranking": 1,
      "node_name": "instance-0000000003",
      "node_id": "VXPR1rrYSduESDNtN5pq3Q",
      "deciders": [
        {
          "decision": "NO",
          "explanation": "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [9.991089502970379%]",
          "decider": "disk_threshold"
        },
        {
          "decision": "NO",
          "explanation": "index has a preference for tiers [data_hot] and node does not meet the required [data_hot] tier",
          "decider": "data_tier"
        },
        {
          "decision": "NO",
          "explanation": "this node's data roles are exactly [data_frozen] so it may only hold shards from frozen searchable snapshots, but this index is not a frozen searchable snapshot",
          "decider": "dedicated_frozen_node"
        }
      ],
      "node_attributes": {
        "server_name": "instance-0000000003.74708265f72e4930aa084f14d02f51c3",
        "availability_zone": "westus2-2",
        "transform.node": "false",
        "region": "unknown-region",
        "instance_configuration": "azure.es.datafrozen.lsv2",
        "xpack.installed": "true",
        "logical_availability_zone": "zone-0",
        "data": "frozen"
      }
    },
    {
      "node_decision": "no",
      "transport_address": "10.46.88.114:19646",
      "weight_ranking": 2,
      "node_name": "instance-0000000000",
      "node_id": "lny_G1ToTQC20sYpdheryw",
      "deciders": [
        {
          "decision": "NO",
          "explanation": "a copy of this shard is already allocated to this node [[.ds-metrics-system.process-default-2021.05.27-000004][0], node[lny_G1ToTQC20sYpdheryw], [P], s[STARTED], a[id=Z_9a9u8gS-WC8zXuInRwNw]]",
          "decider": "same_shard"
        }
      ],
      "node_attributes": {
        "server_name": "instance-0000000000.74708265f72e4930aa084f14d02f51c3",
        "availability_zone": "westus2-3",
        "transform.node": "true",
        "region": "unknown-region",
        "instance_configuration": "azure.data.highio.l32sv2",
        "xpack.installed": "true",
        "logical_availability_zone": "zone-0",
        "data": "hot"
      }
    }
  ],
  "current_state": "unassigned",
  "shard": 0,
  "primary": false,
  "allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
  "unassigned_info": {
    "last_allocation_status": "no_attempt",
    "reason": "INDEX_CREATED",
    "at": "2021-05-27T22:11:31.868Z"
  }
}

My ILM policy

PUT _ilm/policy/logs
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "shrink": {
            "number_of_shards": 1
          },
          "rollover": {
            "max_primary_shard_size": "3gb",
            "max_age": "3h"
          }
        }
      },
      "frozen": {
        "min_age": "1h",
        "actions": {
          "searchable_snapshot": {
            "snapshot_repository": "found-snapshots",
            "force_merge_index": true
          }
        }
      }
    }
  }
}
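For what it's worth, the explain output above is for a replica ("primary": false). The hot node already holds the primary (the same_shard decider), and the dedicated frozen node only accepts frozen searchable snapshots, so with a single hot node the replica has nowhere to go. If that's what's happening, one workaround sketch while testing — assuming you can tolerate running without replicas, and with an illustrative index pattern — is:

PUT .ds-metrics-system.process-default-*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}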

Check that as well.

It is a brand new cluster with barely 1 GB of data.
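The ~90% disk usage that the disk_threshold decider reports on the frozen node isn't necessarily related to your 1 GB of data: dedicated frozen nodes reserve most of their disk for the searchable-snapshot shared cache, so a high reading there can be expected. You can compare per-node disk usage with:

GET _cat/allocation?v&h=node,disk.percent,disk.used,disk.total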

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.