I'm having the same problem, so I thought I would add my configs here too. My situation is similar to the (unanswered) thread "Unassigned shards, v2".
Each morning at 9am JST (which is midnight UTC), a new daily Logstash index is created in Elasticsearch. Its new shards start out UNASSIGNED and stay that way until they are manually rerouted with the _cluster/reroute API. It can take anywhere from a few minutes to an hour for things to settle down, and during that time Kibana has no access to the data and Logstash starts erroring because Elasticsearch is effectively unavailable.
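For reference, the manual reroute I run looks roughly like this (pre-5.x reroute syntax; the index name and shard number below are just placeholders for whatever happens to be unassigned that morning, and the target node is one of my three data nodes):

curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands": [
    {
      "allocate": {
        "index": "logstash-2016.03.15",
        "shard": 0,
        "node": "stg-agselastic101z.stg.jp.local",
        "allow_primary": true
      }
    }
  ]
}'

I repeat that for each unassigned shard until the cluster goes green again.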
The output of GET _cluster/settings shows allocation is enabled:

{
  "persistent": {},
  "transient": {
    "cluster": {
      "routing": {
        "allocation": {
          "enable": "all"
        }
      }
    }
  }
}
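To see which shards are stuck, I use the cat shards API (a quick check, assuming the default HTTP port):

curl -XGET 'localhost:9200/_cat/shards?v' | grep UNASSIGNED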
My elasticsearch.yml is essentially the same on all 3 nodes (only node.name and network.publish_address differ) and is otherwise barely changed from the defaults:
cluster.name: prjsearch
node.name: stg-agselastic101z.stg.jp.local
node.max_local_storage_nodes: 1
path.conf: /etc/elasticsearch
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.73.11.105:9300", "10.73.12.105:9300", "10.73.13.105:9300"]
discovery.zen.minimum_master_nodes: 2
gateway.expected_nodes: 0
http.cors.allow-origin: "*"
http.cors.enabled: true
network.publish_address: 10.73.11.105
node.data: true
node.master: true
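When the problem is happening, I also check node membership and overall health like this (again assuming the default port):

curl -XGET 'localhost:9200/_cat/nodes?v'
curl -XGET 'localhost:9200/_cluster/health?pretty'

All three nodes are listed and the cluster reports red until the reroutes are done.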
Any help is appreciated. It's frustrating to have to do manual operations on this cluster every morning.