Unassigned shards on Elastic Cloud system indexes with 0-replica settings

Using 1 node with "number_of_replicas": "0", all Beats indexes work properly, except these system ones:

.monitoring-kibana
.watcher-history
.watches

It seems that all system indexes have unassigned replicas, even though my settings look correct.
As a side note, I originally created a 2-zone environment, which I've since shrunk to 1 zone.
POST _cluster/reroute?pretty
didn't help
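
A reroute won't help if the shard simply has nowhere to go; the cluster allocation explain API usually reports exactly why a replica stays unassigned. A minimal sketch (the index name below is just a placeholder for one of the yellow system indexes):

GET _cluster/allocation/explain
{
  "index": ".monitoring-kibana",
  "shard": 0,
  "primary": false
}

The "explanation" field in the response states the allocation decider that rejected each node.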

Example of index template for .monitoring-kibana:

{
  "index": {
    "format": "7",
    "codec": "best_compression",
    "number_of_shards": "1",
    "auto_expand_replicas": "0-1",
    "number_of_replicas": "0"
  }
}

Yet all of these indexes are created with 1 replica.
Other settings:
"cluster.routing.rebalance.enable": "all"

Setting auto_expand_replicas to false turns the index green. What I don't understand is why this setting isn't adjusted automatically for system indexes in a managed cloud service.
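
Note that template changes only apply to newly created indexes; for existing yellow indexes the setting has to be overridden in place. A sketch, assuming the wildcard matches the affected system indexes:

PUT .monitoring-kibana-*/_settings
{
  "index": {
    "auto_expand_replicas": false,
    "number_of_replicas": 0
  }
}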

How many shards do you have on the node? How much disk space have you got left?

74 shards, one per index, plus 20 yellow system ones.
Plenty of space; only 5-10% is used on the 2 hot-warm nodes.

Yellow status is not a problem; it just means you have a replica shard configured that cannot be allocated now that you only have one node.

I understand, thank you, but I like things tidy. Imagine a scenario where I have to prove that the cluster state is green and healthy.
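
For that kind of check, the health API reports the overall status, and a per-index breakdown shows which indexes keep it yellow:

GET _cluster/health?pretty

GET _cluster/health?level=indices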

If auto_expand_replicas defaults to false according to the documentation, then this seems to be an issue created by downsizing the cluster from 2 zones to 1. If that's the case, I guess it should be considered a bug and handled by the Elastic Cloud automation process.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.