Shard Awareness and Allocation

I have upgraded the RAM on my data nodes to 64 GB, and before restarting I decided to start using shard allocation awareness.

I added the following to my elasticsearch.yml:
node.master: false
node.data: true
node.box_type: hot
node.zone: hot_zone
node.rack_id: OP-01-PM-3819

The thought here is that because my ES nodes are virtualized on physical servers, I wanted to make sure that no two copies of the same shard end up on the same physical host.
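To confirm the attributes were actually picked up after the restart, something like this should list them per node (the _cat/nodeattrs endpoint exists from 2.0; on older versions the attributes appear in the nodes info API). Every hot node should report rack_id, zone and box_type with the values from elasticsearch.yml:

GET /_cat/nodeattrs?v
GET /_nodes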

I have 12 hot nodes on SSDs (64 GB RAM, 6 TB of disk each) with node.zone: hot and node.box_type: hot, and 3 cold nodes on spinning disk (32 GB RAM, 64 TB of disk each) with node.zone: cold and node.box_type: hot.

The idea is that all new indices are written to the hot nodes, and before aging out old indices I move them to the cold nodes and close them.
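Roughly, that flow looks like this (just a sketch; the template name, index pattern and dates are made up, and it assumes the tiers are distinguished by the box_type attribute with values hot and cold):

# route all new indices to the hot tier
PUT /_template/hot_tier
{
  "template": "logstash-*",
  "settings": {
    "index.routing.allocation.require.box_type": "hot"
  }
}

# when an index ages out, re-route it to the cold tier,
# wait for relocation to finish, then close it
PUT /logstash-2015.09.01/_settings
{
  "index.routing.allocation.require.box_type": "cold"
}

POST /logstash-2015.09.01/_close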

When I restarted the cluster, the shards did not reallocate themselves and my cluster is VERY unbalanced.

My cluster settings:

{
  "persistent": {
    "cluster": {
      "routing": {
        "rebalance": {
          "enable": "all"
        },
        "allocation": {
          "disable_allocation": "false",
          "allow_rebalance": "indices_primaries_active",
          "awareness": {
            "attributes": "rack_id,zone"
          },
          "enable": "all"
        }
      }
    },
    "threadpool": {
      "bulk": {
        "queue_size": "10000"
      }
    }
  },
  "transient": {
    "cluster": {
      "routing": {
        "rebalance": {
          "enable": "all"
        },
        "allocation": {
          "awareness": {
            "attributes": "rack_id,zone"
          },
          "allow_rebalance": "indices_primaries_active",
          "enable": "all"
        }
      }
    }
  }
}
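(The routing settings above are dynamic, so they can be adjusted at runtime rather than in elasticsearch.yml; for example, the awareness attributes can be updated with a settings call like the one below, where the value shown is just the current one, not a recommendation:)

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "rack_id,zone"
  }
}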

What should I have set for the shards to reallocate appropriately?

How unevenly?
What does _cat/allocation look like?
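That is, the per-node shard count and disk usage from:

GET /_cat/allocation?v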

After a night of sitting, and after splitting my 12 hot nodes into 4 zones (node.zone: hot_zone1, node.zone: hot_zone2, etc.), I get the following from _cat/allocation (columns: shards, disk.indices, disk.used, disk.avail, disk.total, disk.percent, host, ip, node):

269   1.6tb  1.7tb  4.2tb  5.9tb 29 10.1.55.16 10.1.55.16 WORKER_NODE_9  
197   1.7tb  2.1tb  3.7tb  5.9tb 36 10.1.55.18 10.1.55.18 WORKER_NODE_11 
  0      0b 25.5tb 43.9tb 69.4tb 36 10.1.55.32 10.1.55.32 STORAGE_NODE_2 
187     1tb  1.1tb  4.7tb  5.9tb 19 10.1.55.15 10.1.55.15 WORKER_NODE_8  
126   1.1tb 25.5tb 43.9tb 69.4tb 36 10.1.55.33 10.1.55.33 STORAGE_NODE_3 
222   1.1tb  1.7tb  4.1tb  5.9tb 29 10.1.55.13 10.1.55.13 WORKER_NODE_6  
268   1.7tb  2.6tb  3.3tb  5.9tb 44 10.1.55.11 10.1.55.11 WORKER_NODE_4  
188   1.6tb  1.6tb  4.3tb  5.9tb 27 10.1.55.12 10.1.55.12 WORKER_NODE_5  
188   1.2tb  1.7tb  4.2tb  5.9tb 28 10.1.55.19 10.1.55.19 WORKER_NODE_12 
258     2tb  2.9tb    4tb  6.9tb 42 10.1.55.8  10.1.55.8  WORKER_NODE_1  
 72 579.4gb 25.5tb 43.9tb 69.4tb 36 10.1.55.31 10.1.55.31 STORAGE_NODE_1 
185   1.4tb  2.5tb  3.4tb  5.9tb 42 10.1.55.10 10.1.55.10 WORKER_NODE_3  
185   1.4tb  2.7tb  4.1tb  6.9tb 40 10.1.55.9  10.1.55.9  WORKER_NODE_2  
235   1.9tb  2.4tb  3.4tb  5.9tb 41 10.1.55.17 10.1.55.17 WORKER_NODE_10 
186   1.2tb  2.1tb  3.8tb  5.9tb 35 10.1.55.14 10.1.55.14 WORKER_NODE_7  
 70                                                       UNASSIGNED
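For the 70 unassigned shards, something like the following should show which indices they belong to and (on versions that have the unassigned.reason column) why they are not allocating; filter the output for UNASSIGNED. From 5.0 onward there is also an allocation explain API that spells out the allocation decision for a shard:

GET /_cat/shards?v&h=index,shard,prirep,state,unassigned.reason

GET /_cluster/allocation/explain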