Hello everyone,
Right now we have a 7-node cluster running version 6.3.0. One node is dedicated to Kibana. Every other node has the following options in elasticsearch.yml:
node.master: true
node.data: true
node.ingest: true
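The Kibana-only node is the odd one out; as a rough sketch (not a copy of that node's actual file), a coordinating-only node for Kibana would simply disable all three roles:
node.master: false
node.data: false
node.ingest: false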
These are my _cluster/settings:
GET _cluster/settings
{
  "persistent": {
    "xpack": {
      "monitoring": {
        "collection": {
          "enabled": "true"
        }
      }
    }
  },
  "transient": {
    "cluster": {
      "routing": {
        "allocation": {
          "disk": {
            "watermark": {
              "low": "92%",
              "flood_stage": "99%",
              "high": "97%"
            }
          }
        }
      },
      "info": {
        "update": {
          "interval": "1m"
        }
      }
    },
    "discovery": {
      "zen": {
        "minimum_master_nodes": "4"
      }
    }
  }
}
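For reference: with six master-eligible nodes the quorum is (6 / 2) + 1 = 4, which matches the discovery.zen.minimum_master_nodes value above.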
For a couple of weeks now I have had a red cluster because a lot of shards do not have a valid shard copy.
GET /_cat/shards?v&h=index,shard,prirep,state,unassigned.reason&s=state
index shard prirep state unassigned.reason
.monitoring-logstash-6-2019.02.25 0 p UNASSIGNED DANGLING_INDEX_IMPORTED
.monitoring-logstash-6-2019.02.25 0 r UNASSIGNED DANGLING_INDEX_IMPORTED
.monitoring-es-6-2019.02.21 0 p UNASSIGNED DANGLING_INDEX_IMPORTED
.monitoring-es-6-2019.02.21 0 r UNASSIGNED DANGLING_INDEX_IMPORTED
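For any one of these copies, the allocation explain API gives more detail on why it stays unassigned; a minimal sketch using the first index from the list:
GET _cluster/allocation/explain
{
  "index": ".monitoring-es-6-2019.02.21",
  "shard": 0,
  "primary": true
}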
There are many more like these. All of them were deleted weeks ago by Curator. First I tried to delete these indices with:
DELETE .monitoring-logstash-6-2019.02.25,.monitoring-es-6-2019.02.21
But this seems to be an endless story. Does anyone know how to find the root cause and fix it?
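So far I have been collecting the index names by hand from the _cat/shards output; a minimal sketch for listing every red index in one request:
GET _cat/indices?v&health=red&h=index,health,status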
Regards,
Christian