Hello,
I'm very new to the community, and here I am after some months of reading and configuring.
I'm trying to work out a problem with my new cluster. I have the following setup:
Elastic 7.13
2 Master nodes + 1 Voting only
3 Hot nodes
2 Cold nodes
1 Kibana Node
I misconfigured one of my master nodes and it joined the cluster as a data node. I then corrected the configuration, but afterwards the whole cluster stopped functioning, I believe because of the many unassigned shards.
It also seems that my elastic user can no longer authenticate, so I created a tempuser on one of the master nodes.
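For reference, I created the temp user with the file-realm CLI tool; I'm going from memory, but it was something like this (path assumes a package install):

```shell
# Create a temporary superuser in the file realm on the master node.
# elasticsearch-users writes to config/users and config/users_roles,
# so this works even when the cluster itself can't authenticate requests.
/usr/share/elasticsearch/bin/elasticsearch-users useradd tempuser \
  -p tempuser \
  -r superuser
```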
Given this big mess, and since I don't have any data I need to keep, I would like to reset the cluster.
So my question to the community is: what can I do to reset my already-configured cluster to a state where it regenerates everything from scratch, like the system indices and passwords?
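To be concrete, what I have in mind is something like the following full wipe, run on every node. This is just my guess at the procedure, assuming the default systemd service name and the default `path.data` of `/var/lib/elasticsearch`; please correct me if it's wrong or unsafe:

```shell
# Hedged sketch of a full cluster reset; run on EVERY node in the cluster.
# WARNING: this destroys all indices and cluster state on the node.
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/*    # wipe data path (assumes default path.data)
sudo systemctl start elasticsearch

# Once the fresh cluster has formed, regenerate the built-in user passwords
# (run once, on any node):
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
```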
I would also like to provide some output:
curl -k -X GET -u tempuser:tempuser "https://p1elkmaster01.prod.local:9200/_cat/health?v"
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1623135458 06:57:38 elk-prod red 8 4 4 2 0 0 42 0 - 8.7%
curl -k -X GET -u tempuser:tempuser "https://p1elkmaster01.prod.local:9200/_cluster/allocation/explain?pretty"
{
  "index" : ".monitoring-kibana-7-2021.06.03",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2021-06-07T12:54:41.650Z",
    "last_allocation_status" : "no_valid_shard_copy"
  },
  "can_allocate" : "no_valid_shard_copy",
  "allocate_explanation" : "cannot allocate because all found copies of the shard are either stale or corrupt",
  "node_allocation_decisions" : [
    {
      "node_id" : "C4rjkThJQ6egb6PKzzNx4A",
      "node_name" : "p1elkdatah02.prod.local",
      "transport_address" : "10.47.30.50:9300",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.node" : "false"
      },
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    },
    {
      "node_id" : "eIPj43WFRfGHfpc5qq5NQQ",
      "node_name" : "p4elkdatac01.prod.local",
      "transport_address" : "10.47.30.57:9300",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.node" : "false"
      },
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    },
    {
      "node_id" : "lQoBjVT4STi89hh0LZx5gA",
      "node_name" : "p1elkdatah01.prod.local",
      "transport_address" : "10.47.30.49:9300",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.node" : "false"
      },
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    },
    {
      "node_id" : "wVg87t4VTH-IMNoQJdK4eA",
      "node_name" : "p1elkdatac01.prod.local",
      "transport_address" : "10.47.30.56:9300",
      "node_attributes" : {
        "ml.machine_memory" : "30704427008",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "in_sync" : false,
        "allocation_id" : "37glbvDeR52MQ2-jjp4keg"
      }
    }
  ]
}
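From reading the docs, since the explain output says there is no valid shard copy left, it looks like one could also force-assign each red primary as an empty shard via the reroute API, explicitly accepting data loss. Would something like this (using the index, shard, and node names from the output above) be an alternative to a full reset?

```shell
# Hedged sketch: force-allocate an empty primary for one red index,
# accepting data loss (allocate_empty_primary per the 7.x reroute API).
# It would need to be repeated for every unassigned primary.
curl -k -X POST -u tempuser:tempuser \
  "https://p1elkmaster01.prod.local:9200/_cluster/reroute?pretty" \
  -H 'Content-Type: application/json' -d'
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": ".monitoring-kibana-7-2021.06.03",
        "shard": 0,
        "node": "p1elkdatah01.prod.local",
        "accept_data_loss": true
      }
    }
  ]
}'
```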
curl -k -X GET -u tempuser:tempuser "https://p1elkmaster01.prod.local:9200/_cat/shards?v"
index shard prirep state docs store ip node
logstash-2021.06.07 0 r UNASSIGNED
logstash-2021.06.07 0 p UNASSIGNED
.kibana-event-log-7.10.2-000003 0 p UNASSIGNED
logs-index_pattern_placeholder 0 p STARTED 0 208b 10.47.30.56 p1elkdatac01.prod.local
logs-index_pattern_placeholder 0 r UNASSIGNED
logstash-2021.06.04-000006 0 p STARTED 0 208b 10.47.30.57 p4elkdatac01.prod.local
logstash-2021.06.04-000006 0 r UNASSIGNED
logstash-2021.06.06-000008 0 p STARTED 0 208b 10.47.30.56 p1elkdatac01.prod.local
logstash-2021.06.06-000008 0 r UNASSIGNED
logstash-2021.06.08-000009 0 p STARTED 0 208b 10.47.30.56 p1elkdatac01.prod.local
logstash-2021.06.08-000009 0 r UNASSIGNED
.monitoring-es-7-2021.06.05 0 p UNASSIGNED
.kibana-event-log-7.10.2-000005 0 p UNASSIGNED
ilm-history-3-000004 0 p UNASSIGNED
.transform-internal-005 0 p UNASSIGNED
.monitoring-es-7-2021.06.06 0 p UNASSIGNED
metrics-endpoint.metadata_current_default 0 p UNASSIGNED
.monitoring-kibana-7-2021.06.05 0 p UNASSIGNED
.monitoring-kibana-7-2021.06.06 0 p UNASSIGNED
.monitoring-es-7-2021.06.07 0 p UNASSIGNED
logstash-2021.06.05-000007 0 p STARTED 0 208b 10.47.30.56 p1elkdatac01.prod.local
logstash-2021.06.05-000007 0 r UNASSIGNED
.apm-custom-link 0 p UNASSIGNED
.kibana_security_session_1 0 p UNASSIGNED
logstash-2021.06.03-000005 0 p STARTED 206249 35.5mb 10.47.30.57 p4elkdatac01.prod.local
logstash-2021.06.03-000005 0 r UNASSIGNED
.async-search 0 p UNASSIGNED
.monitoring-es-7-2021.06.02 0 p UNASSIGNED
ilm-history-3-000002 0 p UNASSIGNED
.apm-agent-configuration 0 p UNASSIGNED
metrics-index_pattern_placeholder 0 p STARTED 0 208b 10.47.30.56 p1elkdatac01.prod.local
metrics-index_pattern_placeholder 0 r UNASSIGNED
.monitoring-kibana-7-2021.06.02 0 p UNASSIGNED
ilm-history-3-000003 0 p UNASSIGNED
.kibana_1 0 p UNASSIGNED
.monitoring-kibana-7-2021.06.07 0 p UNASSIGNED
.kibana-event-log-7.10.2-000004 0 p UNASSIGNED
.monitoring-es-7-2021.06.04 0 p UNASSIGNED
.kibana_task_manager_1 0 p UNASSIGNED
.transform-notifications-000002 0 p UNASSIGNED
.monitoring-kibana-7-2021.06.03 0 p UNASSIGNED
.monitoring-es-7-2021.06.03 0 p UNASSIGNED
.kibana-event-log-7.10.2-000002 0 p UNASSIGNED
.monitoring-kibana-7-2021.06.04 0 p UNASSIGNED
ilm-history-3-000005 0 p UNASSIGNED
.monitoring-es-7-2021.06.08 0 p STARTED 160 7.7mb 10.47.30.56 p1elkdatac01.prod.local
curl -k -X GET -u tempuser:tempuser "https://p1elkmaster01.prod.local:9200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open logstash-2021.06.08-000009 gny1VrocTH-Hsl6PjaCJCQ 1 1 0 0 208b 208b
yellow open logstash-2021.06.05-000007 IpVUfRaPQgCiz4WUcbIwdg 1 1 0 0 208b 208b
yellow open logstash-2021.06.06-000008 ObtbmAkPRDGXm4WR0oyBKw 1 1 0 0 208b 208b
yellow open logstash-2021.06.03-000005 zgF_I9CaT76gdK7LHgB3xg 1 1 206249 0 35.5mb 35.5mb
yellow open logstash-2021.06.04-000006 O9O5-gW9Q6izP_-vtnbP7g 1 1 0 0 208b 208b
red open .monitoring-es-7-2021.06.02 qYu3USslRRyrQTI_7TtBHA 1 0
red open .apm-custom-link 3xIUQjVAS-eR_2aGGw3JKw 1 0
yellow open logs-index_pattern_placeholder FIkdyG0lTcuH2lozOBE7LA 1 1 0 0 208b 208b
red open .kibana_task_manager_1 WdVPr0OwSEWXbnE1WFC52Q 1 0
red open .monitoring-es-7-2021.06.07 mEWtz8PqRjy_5PZG698UZw 1 0
green open .monitoring-es-7-2021.06.08 Yc6Ea4VvSSuXuTnPtZPymg 1 0 160 88 7.6mb 7.6mb
red open .monitoring-es-7-2021.06.05 QF0w1QzpSiiis846k5oi4A 1 0
red open .monitoring-es-7-2021.06.06 bEQtFyxXTh-33sBtX7j6tw 1 0
red open .monitoring-es-7-2021.06.03 R-R0TrvYSIuEa-aYqskT-Q 1 0
red open .monitoring-es-7-2021.06.04 8RHEQF9oQK6V96VxAYDYAA 1 0
red open .monitoring-kibana-7-2021.06.07 cGsKxDg7SOyqvjq7d3GAAQ 1 0
red open .monitoring-kibana-7-2021.06.06 qEvaYYX5T-qwZWf8z8gzeg 1 0
red open .monitoring-kibana-7-2021.06.05 6_9Y75CuS9ivofy-KB_ElA 1 0
red open .apm-agent-configuration W_g5mBHOTKGuoXYAyq6WNQ 1 0
red open .transform-internal-005 N9EtQZEgRI-Kk6nWX4uleg 1 0
red open .monitoring-kibana-7-2021.06.04 CLqPUPToTDuZT1swS8aQ9A 1 0
red open .kibana_1 nQXAn-rjQPujtYdEdusDCg 1 0
red open .monitoring-kibana-7-2021.06.03 rkYjgWPnT1m5XlgykZYFYA 1 0
red open .monitoring-kibana-7-2021.06.02 A73A0puhSmisSzz55iM4AQ 1 0
red open metrics-endpoint.metadata_current_default qlpjXwjYSiSqqmqitenOyQ 1 0
red open .kibana-event-log-7.10.2-000004 k8HAVl6-ROCC5367KV2Riw 1 0
red open .kibana-event-log-7.10.2-000005 wzP50ye3RFqFd71A12tRDg 1 0
yellow open metrics-index_pattern_placeholder vHDKEhW_TsWcSog-bthyQw 1 1 0 0 208b 208b
red open .async-search s3Ev4oQrS_-1mmuo9utxJA 1 0
red open logstash-2021.06.07 BnXfyEgvQseQwsYE9YbkUQ 1 1
red open .kibana-event-log-7.10.2-000002 H3tsXo4eTyu5MCxW82DzQg 1 0
red open .kibana-event-log-7.10.2-000003 8AWoPgCeTNWqwvO8W6TZug 1 0