Downsampling only works one time

Hello, I am running into an issue with Elasticsearch where downsampling will only run a single time; subsequent runs seem to just create an empty index. I have tried several different Docker versions and still see the same behavior. Here are the commands I am using to create the ingest pipeline, ILM policy, and index template:

curl -X PUT "localhost:9200/_ingest/pipeline/kentik?pretty" -H 'Content-Type: application/json'' -d'
{
  "description": "Kentik-pipeline",
  "processors": [
{
      "date" : {
        "field" : "timestamp",
        "target_field" : "@timestamp",
        "formats" : ["epoch_second"]
      }
}
  ]
}'
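
For reference, the pipeline can be sanity-checked with the simulate API; the epoch value below is just a made-up sample:

curl -X POST "localhost:9200/_ingest/pipeline/kentik/_simulate?pretty" -H 'Content-Type: application/json' -d'
{
  "docs": [
    { "_source": { "timestamp": 1741236581 } }
  ]
}'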

curl -X PUT "http://localhost:9200/_ilm/policy/kentik?pretty" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "60m",
            "max_primary_shard_size": "50gb"
          },
          "downsample": {
  	        "fixed_interval": "10m"
  	      }
        }
      },
      "warm": {
        "min_age": "60m",
        "actions": {
           "downsample": {
  	        "fixed_interval": "20m"
  	      }
          }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      },
      "cold": {
        "min_age": "1d",
        "actions": {
          "downsample": {
  	        "fixed_interval": "60m"
  	      }
          }
      }
  }
}}
'
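
To see which ILM step each backing index is on (and any error it is stuck on), the explain API can be run against the backing indices; the .ds-kentik-* pattern assumes the default backing-index naming:

curl -X GET "localhost:9200/.ds-kentik-*/_ilm/explain?pretty"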

curl -X PUT "localhost:9200/_index_template/kentik?pretty" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["kentik"],
  "data_stream": { },
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0,
      "index.mode": "time_series",
      "index.lifecycle.name": "kentik"
    },
    "mappings": {
      "dynamic":"false",
      "properties": {
        "device_name": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "device_id": {
          "type": "integer",
          "time_series_dimension": true
        },
        "dst_addr": {
          "type": "ip",
          "time_series_dimension": true
        },
        "dst_as": {
          "type": "long",
          "time_series_dimension": true
        },
        "dst_bgp_as_path": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "dst_bgp_comm": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "dst_eth_mac": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "dst_flow_tags": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "dst_geo": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "dst_geo_city": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "dst_geo_region": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "dst_nexthop": {
          "type": "ip",
          "time_series_dimension": true
        },
        "dst_nexthop_as": {
          "type": "long",
          "time_series_dimension": true
        },
        "dst_route_prefix": {
          "type": "integer",
          "time_series_dimension": true
        },
        "dst_second_asn": {
          "type": "long",
          "time_series_dimension": true
        },
        "dst_third_asn": {
          "type": "long",
          "time_series_dimension": true
        },
        "header_len": {
          "type": "integer",
          "time_series_dimension": true
        },
        "in_bytes": {
          "type": "integer",
          "time_series_dimension": true
        },
        "in_pkts": {
          "type": "integer",
          "time_series_dimension": true
        },
        "input_int_alias": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "input_int_desc": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "input_port": {
          "type": "integer",
          "time_series_dimension": true
        },
        "l4_dst_port": {
          "type": "integer",
          "time_series_dimension": true
        },
        "l4_src_port": {
          "type": "integer",
          "time_series_dimension": true
        },
        "out_bytes": {
          "type": "integer",
          "time_series_dimension": true
        },
        "out_pkts": {
          "type": "integer",
          "time_series_dimension": true
        },
        "output_int_alias": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "output_int_desc": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "output_port": {
          "type": "integer",
          "time_series_dimension": true
        },
        "protocol": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "src_eth_mac": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "src_flow_tags": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "src_geo": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "src_geo_city": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "src_geo_region": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "src_nexthop": {
          "type": "ip",
          "time_series_dimension": true
        },
        "src_nexthop_as": {
          "type": "long",
          "time_series_dimension": true
        },
        "src_route_prefix": {
          "type": "integer",
          "time_series_dimension": true
        },
        "src_second_asn": {
          "type": "long",
          "time_series_dimension": true
        },
        "src_third_asn": {
          "type": "long",
          "time_series_dimension": true
        },
        "tcp_flags": {
          "type": "integer",
          "time_series_dimension": true
        },
        "tcp_rx": {
          "type": "integer",
          "time_series_dimension": true
        },
        "vlan_in": {
          "type": "integer",
          "time_series_dimension": true
        },
        "vlan_out": {
          "type": "integer",
          "time_series_dimension": true
        },
        "@timestamp": {
          "type": "date"
        }
      }
    }
  },
  "priority": 400
}'
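
For completeness, this is roughly how the data stream is created and fed; the field values are made-up samples, and the pipeline is passed per request because the template does not set index.default_pipeline. Note that in time_series mode a document is only accepted if its @timestamp falls inside the backing index's time window:

curl -X PUT "localhost:9200/_data_stream/kentik?pretty"

curl -X POST "localhost:9200/kentik/_doc?pipeline=kentik&pretty" -H 'Content-Type: application/json' -d'
{
  "timestamp": 1741236581,
  "device_name": "router-01",
  "device_id": 1,
  "protocol": "tcp",
  "in_bytes": 1500,
  "in_pkts": 3
}'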

I am sure I am missing something obvious in the ILM policy; any help would be appreciated. I have pored over the docs and can't figure out what I am missing. I found this message in the debug logs, but I'm not sure if it's related:

{"@timestamp":"2025-03-06T04:49:41.155Z", "log.level":"DEBUG", "message":"unexpected exception during publication", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[95f83de4ffc6][masterService#updateTask][T#33]","log.logger":"org.elasticsearch.action.support.master.TransportMasterNodeAction","elasticsearch.cluster.uuid":"lIcG4Z1yQ1ex1Cgtip4C5w","elasticsearch.node.id":"YCvG6bO5T7eHEos7gAljcg","elasticsearch.node.name":"95f83de4ffc6","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.ResourceNotFoundException","error.message":"the task with id downsample-downsample-10m-.ds-kentik-2025.03.06-000002-0-10m and allocation id 7 doesn't exist","error.stack_trace":"org.elasticsearch.ResourceNotFoundException: the task with id downsampledownsample-10m-.ds-kentik-2025.03.06-000002-0-10m and allocation id 7 doesn't exist\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.persistent.PersistentTasksClusterService$4.execute(PersistentTasksClusterService.java:265)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.cluster.service.MasterService$UnbatchedExecutor.execute(MasterService.java:574)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.action.ActionListener.run(ActionListener.java:452)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.action.ActionListener.run(ActionListener.java:452)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023)\n\tat org.elasticsearch.server@8.17.2/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1575)\n"}

Hi, when using downsampling in series, you need to keep the intervals as multiples of 3.

You are using 10m, 20m, and 60m; that won't work. Each later interval needs to be the previous one multiplied by 3 x n (n = 1 being the minimum), for example:
hot = 10m
warm = hot interval x 3 = 30m
cold = warm interval x 3 = 90m
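
For example, a minimal sketch of the adjusted policy with those intervals, keeping your phase timings and rollover settings:

curl -X PUT "http://localhost:9200/_ilm/policy/kentik?pretty" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "60m", "max_primary_shard_size": "50gb" },
          "downsample": { "fixed_interval": "10m" }
        }
      },
      "warm": {
        "min_age": "60m",
        "actions": { "downsample": { "fixed_interval": "30m" } }
      },
      "cold": {
        "min_age": "1d",
        "actions": { "downsample": { "fixed_interval": "90m" } }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}'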

Also keep in mind that when the downsample interval is too coarse, the graph may not plot anything; you need to use a bigger time range on the dashboard/panel to see the graph plotted.

At my employer we are not downsampling in HOT, but we are in WARM and COLD:
hot = no downsampling
warm = 5m (data moves here after 2 days)
cold = 15m (data moves here after 30 days)
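
A rough sketch of that policy (the policy name, rollover settings, and retention here are placeholders, not our real values):

curl -X PUT "http://localhost:9200/_ilm/policy/flows?pretty" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d", "max_primary_shard_size": "50gb" }
        }
      },
      "warm": {
        "min_age": "2d",
        "actions": { "downsample": { "fixed_interval": "5m" } }
      },
      "cold": {
        "min_age": "30d",
        "actions": { "downsample": { "fixed_interval": "15m" } }
      }
    }
  }
}'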

This way, users get full granularity when they look at the graph of events arriving "now", but when looking at old events they will be filtering with a day, week, or month in mind.

In COLD, each hour has just 4 documents per time series (60m / 15m = 4), which is 96 documents per day.

I hope this can help you.