All new indexes created have unassigned shards

I'm having the same problem; thought I would throw in my configs etc. here too.

Similar to "Unassigned shards, v2" (unanswered).

Each morning at 9am JST (00:00 UTC), a new daily logstash index is created in Elasticsearch. The new index's shards start out UNASSIGNED and remain that way until I manually reroute them with the _cluster/reroute API. It can take anywhere from a few minutes to an hour for things to settle down, during which time Kibana has no access to the data and Logstash starts erroring because Elasticsearch is effectively unavailable.
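For reference, the manual reroute is along these lines, assuming the ES 1.x-style "allocate" reroute command; the index name, shard number, and node name below are placeholders, not values from my cluster:

```shell
# Build the reroute request body (placeholder index/shard/node names;
# "allocate" with allow_primary is the older, pre-5.0 command form).
BODY='{
  "commands": [
    {
      "allocate": {
        "index": "logstash-2015.09.01",
        "shard": 0,
        "node": "node-1",
        "allow_primary": true
      }
    }
  ]
}'
# Print the body; to send it to the cluster you would run something like:
# curl -XPOST 'http://localhost:9200/_cluster/reroute' -d "$BODY"
printf '%s\n' "$BODY"
```

Note that allow_primary force-allocates an empty primary and can discard data if a copy of the shard exists elsewhere; for replica shards it isn't needed.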

My cluster settings (from GET /_cluster/settings):

{
  "persistent": {},
  "transient": {
    "cluster": {
      "routing": {
        "allocation": {
          "enable": "all"
        }
      }
    }
  }
}
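For completeness, a sketch of how that transient allocation setting would be (re)applied via the cluster settings API; the localhost URL is a placeholder:

```shell
# Request body for the same transient setting shown above:
# enable shard allocation cluster-wide.
BODY='{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}'
# Print the body; applying it would look like:
# curl -XPUT 'http://localhost:9200/_cluster/settings' -d "$BODY"
printf '%s\n' "$BODY"
```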

My elasticsearch.yml is the same on all 3 nodes, and mostly unchanged from the defaults:

cluster.name: prjsearch
node.max_local_storage_nodes: 1
path.conf: /etc/elasticsearch
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["", "", ""]
discovery.zen.minimum_master_nodes: 2
gateway.expected_nodes: 0
http.cors.allow-origin: "*"
http.cors.enabled: true
network.publish_address: true
node.master: true

Any help is appreciated. It's frustrating to have to do manual operations on this cluster every morning.