No_shard_available_action_exception

Hello,
My test server is generating an error that does not allow me to continue with ingestion. It is a single server, and the entire ELK stack runs on it.

Initially, the error message I get is the no_shard_available_action_exception from the title.

From this I can identify the problematic data stream, ".ds-logs-vmware-production-2024.11.11-000016", but the truth is I don't know how to fix it.
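For reference, the unhealthy backing indices can also be listed directly with the cat indices API (a standard call, nothing specific to my setup):

GET _cat/indices?v&health=red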

I ran GET _cluster/allocation/explain, and it allowed me to identify that the disk where the indices are stored was already at roughly 94% of its capacity, which is what caused the problem.
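For anyone who wants to reproduce it, this was the call, made with no request body (which is why the response below explains a randomly chosen unassigned shard). Per the note in the response itself, a specific shard can be targeted by adding "index", "shard", and "primary" to the body:

GET _cluster/allocation/explain

GET _cluster/allocation/explain
{
  "index": ".ds-logs-xxxxxxxxx-produccion-2024.11.12-000177",
  "shard": 0,
  "primary": true
}

This is the response: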

{
  "note": "No shard was specified in the explain API request, so this response explains a randomly chosen unassigned shard. There may be other unassigned shards in this cluster which cannot be assigned for different reasons. It may not be possible to assign this shard until one of the other shards is assigned correctly. To explain the allocation of other shards (whether assigned or unassigned) you must specify the target shard in the request to this API.",
  "index": ".ds-logs-xxxxxxxxx-produccion-2024.11.12-000177",
  "shard": 0,
  "primary": true,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "INDEX_CREATED",
    "at": "2024-11-12T16:53:03.731Z",
    "last_allocation_status": "no"
  },
  "can_allocate": "no",
  "allocate_explanation": "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",
  "node_allocation_decisions": [
    {
      "node_id": "tVyZ6I3RSxqVqTLLpCTYLA",
      "node_name": "node-1",
      "transport_address": "172.26.6.6:9300",
      "node_attributes": {
        "ml.allocated_processors": "4",
        "ml.machine_memory": "16769712128",
        "transform.config_version": "10.0.0",
        "xpack.installed": "true",
        "ml.config_version": "12.0.0",
        "ml.max_jvm_size": "8384413696",
        "ml.allocated_processors_double": "4.0"
      },
      "roles": [
        "data",
        "data_cold",
        "data_content",
        "data_frozen",
        "data_hot",
        "data_warm",
        "ingest",
        "master",
        "ml",
        "remote_cluster_client",
        "transform"
      ],
      "node_decision": "no",
      "weight_ranking": 1,
      "deciders": [
        {
          "decider": "disk_threshold",
          "decision": "NO",
          "explanation": "the node is above the high watermark cluster setting [cluster.routing.allocation.disk.watermark.high=90%], having less than the minimum required [5.8gb] free space, actual free: [3.6gb], actual used: [93.7%]"
        }
      ]
    }
  ]
}
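From the disk_threshold decider it is clear the node is above the high watermark (90%, which is the default for cluster.routing.allocation.disk.watermark.high). In case it is useful, per-node disk usage can be confirmed with:

GET _cat/allocation?v

Given that this is a single-node test server, what is the recommended way to free space (or temporarily adjust the watermarks) so the shard can be assigned again?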