health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
red open .siem-signals-default-000001 BhEluMC5R3O-1UAOEXNjsA 1 1
red open .siem-signals-default-000002 3KWe-ir6RYqleiCzpjuYJg 1 1
red open .items-default-000001 I4W5BpmmRAmENBj1gaWgTQ 1 1
yellow open cf_http-2023-05-12 BzfKRxqXQNuydXlP29uf0A 1 1 11283152 0 30.4gb 30.4gb
yellow open cf_http-2023-05-13 KkRwYjzTTV210AW97OrZHA 1 1 10963917 0 29.6gb 29.6gb
yellow open cf_http-2023-05-14 38gVUUdiTSasH-0brxKeAg 1 1 10100763 0 27.2gb 27.2gb
yellow open cf_http-2023-05-15 9WwqG5IWSwShflS-eXYiMg 1 1 9671741 0 26.1gb 26.1gb
yellow open cf_http-2023-05-10 9e_Sg58FTXazTU8tTUCKZQ 1 1 14457458 0 37.3gb 37.3gb
yellow open cf_http-2023-05-11 WyZLo7OxTM2sjwBZSVkP_g 1 1 11294386 0 30.5gb 30.5gb
yellow open cf_http-2023-05-16 ZbZmILzUQwCjB_p6hgXa_g 1 1 9532518 0 25.6gb 25.6gb
yellow open cf_http-2023-05-17 QOzmmB_TQF-pOZbJfcu-Eg 1 1 9441068 0 25.4gb 25.4gb
yellow open cf_http-2023-05-18 UStSBmDHRmiUj8kWPwbVlQ 1 1 9350414 0 25gb 25gb
yellow open cf_http-2023-05-19 4BUXtylyTkum_QO3gyEb9Q 1 1 9885463 0 26.5gb 26.5gb
green open .monitoring-es-7-2023.05.20 6XMbXQYJSSa9UjDdVE27mQ 1 0 537029 375300 299.7mb 299.7mb
green open .monitoring-es-7-2023.05.21 CN2by8TfRz69SWKHqWCkEg 1 0 545881 393954 309.5mb 309.5mb
green open .monitoring-es-7-2023.05.24 HCx8HYl7RTqkS0EzZaNDww 1 0 571985 33810 304.1mb 304.1mb
green open .monitoring-es-7-2023.05.25 vXKUAm40Rq6KtbL6TaS1Yw 1 0 580276 51554 308.7mb 308.7mb
green open .monitoring-es-7-2023.05.22 kIDgCjWsTnyAsnk2Yg8hEQ 1 0 554392 412126 315.2mb 315.2mb
green open .monitoring-es-7-2023.05.23 FYbchlz6Qjm7BbYWjLjebg 1 0 562834 16675 298mb 298mb
green open .monitoring-es-7-2023.05.26 pDTO7vzURoqvKW4-Sp5o9Q 1 0 587087 64930 314.2mb 314.2mb
green open .monitoring-es-7-2023.05.27 yEX548AlTMarN27IYqlv0Q 1 0 200350 23698 54.1mb 54.1mb
red open cf_http-2023-05-01 rsBQcEr8QWmAUXHCxvLMNQ 1 1
yellow open cf_http-2023-05-02 WajoPx6hSY2dv7STRN8LtA 1 1 13920382 0 31.3gb 31.3gb
red open cf_http-2023-04-30 jWwRcnJmS2quk_Q7ojoqcg 1 1
red open cf_http-2023-05-03 klBSHCWyQG-wFozFDdUFag 1 1
red open cf_http-2023-05-04 pzKrRRHpTguMdYfcWuObVg 1 1
yellow open cf_http-2023-05-09 Op-6odsZQdS-vVckZxnMuA 1 1 2010508 0 4.5gb 4.5gb
red open cf_http-2023-05-05 o3ByeljATImygRIkpbnSHw 1 1
red open .fleet-policies-7 2r0eUwo4TSGwlbgyCWarTw 1 0
red open cf_http-2023-05-06 bLgNcFHYSH-Vj7y66-DbxA 1 1
red open cf_http-2023-05-07 y3tQQmWiRO6E5Enigjq2TQ 1 1
red open .metrics-endpoint.metadata_united_default BGe68IHwQPqa-s6kil70vQ 1 0
yellow open cf_http-2023-05-08 ZjBvR0EJR2GkQS0acRdMXw 1 1 11685460 0 26.2gb 26.2gb
green open .kibana_task_manager_7.17.2_001 Lp2MDQmaQbK1dmytD4ndaA 1 0 18 525 102.8kb 102.8kb
red open .kibana_7.17.2_001 aIleKa3GSwa6TEK1xDrgSA 1 0
yellow open cf_http-2023-04-21 -EiHodLuSvOYBmUEA9mSXg 1 1 68328067 0 154.8gb 154.8gb
red open cf_http-2023-04-20 62ElZXxvTcKSfjpE5WDvtg 1 1
yellow open cf_http-2023-04-24 Ub-fxfHWQ22caIg_fa2G_g 1 1 25035372 0 56.7gb 56.7gb
yellow open cf_http-2023-04-28 VBBgL13ZQPWsIl0CNpYHXg 1 1 19507590 0 44gb 44gb
green open .transform-internal-007 XyT6oWZPQUau0Qxxn9O93w 1 0 12 7 78kb 78kb
red open .lists-default-000001 42W-XczQTf2z2eLNwL3m5g 1 1
green open .monitoring-kibana-7-2023.05.23 gVDqg65RSUWbl9JaCU8Scw 1 0 17280 0 3.6mb 3.6mb
red open metrics-endpoint.metadata_current_default IgOvTNOAQNqTwXFbUpNc_A 1 0
yellow open cf_http-2023-05-23 QaQo2iN4TKSrpA76eGcG6Q 1 1 37516121 0 102.6gb 102.6gb
green open .monitoring-kibana-7-2023.05.22 6TCsRhQ2RGefmMW18Xm0UQ 1 0 17280 0 3.6mb 3.6mb
yellow open cf_http-2023-05-24 oJMH9FRCRDO1ucIH_AEMqg 1 1 60072553 0 163.5gb 163.5gb
green open .monitoring-kibana-7-2023.05.21 XUtgdhrhSoWcaHnVrnC32Q 1 0 17278 0 3.5mb 3.5mb
yellow open cf_http-2023-05-25 Wjn87cQbTGG3spTXlL54cg 1 1 44819503 0 122.7gb 122.7gb
green open .monitoring-kibana-7-2023.05.20 D555wOfsSBeq9HToP8khPQ 1 0 17278 0 3.3mb 3.3mb
yellow open cf_http-2023-05-26 yWffxAHaQeGfCEhwe1kbKw 1 1 46868527 0 128.3gb 128.3gb
yellow open cf_http-2023-05-20 bKBO_MdSSOKOHwyWF3VP1A 1 1 9730534 0 26.1gb 26.1gb
yellow open cf_http-2023-05-21 EZkwZ04zQGO-CRHH2BeiCA 1 1 38507468 0 105.7gb 105.7gb
yellow open cf_http-2023-05-22 R71k7MDYRC6E-ER9weV6bQ 1 1 48289529 0 132gb 132gb
yellow open cf_http-2023-04-15 NJV0MX9oT8GNZv8dWAXiLw 1 1 59439724 0 134.9gb 134.9gb
green open .monitoring-kibana-7-2023.05.27 4nLHtZr7TOilskvOcpLy8Q 1 0 4318 0 1mb 1mb
green open .monitoring-kibana-7-2023.05.26 -OS_nWdCQvugQcqln5l1ww 1 0 17222 0 3.6mb 3.6mb
green open .monitoring-kibana-7-2023.05.25 xJstPueGS629odwCuRqt2A 1 0 17280 0 3.7mb 3.7mb
green open .monitoring-kibana-7-2023.05.24 IS17YSbnTXy3TgpqZiZ0sA 1 0 17278 0 3.5mb 3.5mb
Restarting Kibana produces this error:
[ERROR][plugins.alerting.monitoring_alert_disk_usage] Executing Rule default:monitoring_alert_disk_usage:a6ce4d60-eedb-11ed-8583-45292c39ddfa has resulted in Error: security_exception: [security_exception] Reason: unable to authenticate with provided credentials and anonymous access is not allowed for this request, caused by: "" - Error: security_exception: [security_exception] Reason: unable to authenticate with provided credentials and anonymous access is not allowed for this request
Restarting Elasticsearch produces this error:
o.e.x.s.a.RealmsAuthenticator] [ELASTICSEARCH-NODE-1] Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]
I tried resetting the Elastic and Kibana passwords, but no luck. Any idea what could be causing this?
I changed the title to something more accurate/helpful.
Restoring is the best approach...
Perhaps with the better title you might get more suggestions.
It is not clear to me how to recreate the index...
Did you try to reset the elastic password?
Also, did you stop and start Elasticsearch? Then try resetting the elastic password.
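For reference, on 7.17 one way to do this when the elastic password is no longer accepted (a sketch, assuming a package install with the CLI tools under /usr/share/elasticsearch/bin and Elasticsearch reachable on http://localhost:9200; adjust paths, protocol, and passwords for your setup) is to create a temporary file-realm superuser and use it to change the elastic password through the API:

# Create a temporary superuser in the file realm (the file realm does not depend on the .security index)
/usr/share/elasticsearch/bin/elasticsearch-users useradd tmp_admin -p 'Temp-Password-123' -r superuser

# Use it to set a new password for the elastic user
curl -u tmp_admin:'Temp-Password-123' -X POST "http://localhost:9200/_security/user/elastic/_password" -H 'Content-Type: application/json' -d '{"password":"NewElasticPassword"}'

# Remove the temporary user again
/usr/share/elasticsearch/bin/elasticsearch-users userdel tmp_admin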
This is the danger of a single-node cluster... It looks like you have lost other indices as well... Do you have any idea how they were lost in the first place?
I managed to restore the index by upgrading Elasticsearch and Kibana. Now the above errors are gone, and only Kibana is still throwing this error:
[ERROR][savedobjects-service] [.kibana] Action `failed with 'no_shard_available_action_ex>
May 28 06:30:25 ELASTICSEARCH-NODE-1 kibana[882706]: Root causes: no_shard_available_action_exception
Checking the status of the shards, I can see that there are still UNASSIGNED primary shards with reason CLUSTER_RECOVERED.
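The listing below comes from the cat shards API; a request along these lines should reproduce it (the exact column list is an assumption based on the output):

GET _cat/shards?h=index,shard,prirep,state,unassigned.reason,node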
.geoip_databases 0 p UNASSIGNED CLUSTER_RECOVERED
.tasks 0 p UNASSIGNED CLUSTER_RECOVERED
.kibana_security_session_1 0 p UNASSIGNED CLUSTER_RECOVERED
.kibana-event-log-7.17.2-000011 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-12 0 r UNASSIGNED CLUSTER_RECOVERED
.reporting-2023-04-23 0 p UNASSIGNED CLUSTER_RECOVERED
.reporting-2022-06-19 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-25 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-06 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-06 0 r UNASSIGNED CLUSTER_RECOVERED
.reporting-2023-03-12 0 p UNASSIGNED CLUSTER_RECOVERED
.fleet-policies-7 0 p UNASSIGNED CLUSTER_RECOVERED
.siem-signals-default-000001 0 p UNASSIGNED CLUSTER_RECOVERED
.siem-signals-default-000001 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-19 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-04-20 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-04-20 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-04-24 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-24 0 r UNASSIGNED CLUSTER_RECOVERED
.kibana-event-log-7.17.2-000013 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-05 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-05 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-16 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-04-19 0 r UNASSIGNED CLUSTER_RECOVERED
.items-default-000001 0 p UNASSIGNED CLUSTER_RECOVERED
.items-default-000001 0 r UNASSIGNED CLUSTER_RECOVERED
.async-search 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-07 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-07 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-14 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-10 0 r UNASSIGNED CLUSTER_RECOVERED
.reporting-2022-04-24 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-04 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-04 0 r UNASSIGNED CLUSTER_RECOVERED
.transform-notifications-000002 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-18 0 r UNASSIGNED CLUSTER_RECOVERED
.security-profile-8 0 p UNASSIGNED CLUSTER_RECOVERED
.metrics-endpoint.metadata_united_default 0 p UNASSIGNED CLUSTER_RECOVERED
.ds-.slm-history-5-2023.04.28-000001 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-02 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-13 0 r UNASSIGNED CLUSTER_RECOVERED
.ds-.logs-deprecation.elasticsearch-default-2023.03.29-000023 0 p UNASSIGNED CLUSTER_RECOVERED
.reporting-2023-03-19 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-01 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-01 0 r UNASSIGNED CLUSTER_RECOVERED
.lists-default-000001 0 p UNASSIGNED CLUSTER_RECOVERED
.lists-default-000001 0 r UNASSIGNED CLUSTER_RECOVERED
.fleet-enrollment-api-keys-7 0 p UNASSIGNED CLUSTER_RECOVERED
.kibana_7.17.2_001 0 p UNASSIGNED CLUSTER_RECOVERED
.ds-.logs-deprecation.elasticsearch-default-2023.04.28-000025 0 p UNASSIGNED CLUSTER_RECOVERED
.apm-agent-configuration 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-03 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-03 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-21 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-15 0 r UNASSIGNED CLUSTER_RECOVERED
metrics-endpoint.metadata_current_default 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-22 0 r UNASSIGNED CLUSTER_RECOVERED
.reporting-2023-01-29 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-26 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-04-30 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-04-30 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-23 0 r UNASSIGNED CLUSTER_RECOVERED
.reporting-2023-02-05 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-17 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-09 0 r UNASSIGNED CLUSTER_RECOVERED
.apm-custom-link 0 p UNASSIGNED CLUSTER_RECOVERED
.reporting-2023-02-26 0 p UNASSIGNED CLUSTER_RECOVERED
.ds-ilm-history-5-2023.01.28-000018 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-04-28 0 r UNASSIGNED CLUSTER_RECOVERED
.siem-signals-default-000002 0 p UNASSIGNED CLUSTER_RECOVERED
.siem-signals-default-000002 0 r UNASSIGNED CLUSTER_RECOVERED
.reporting-2022-12-18 0 p UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-11 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-20 0 r UNASSIGNED CLUSTER_RECOVERED
cf_http-2023-05-08 0 r UNASSIGNED CLUSTER_RECOVERED
.kibana_alerting_cases_8.8.0_reindex_temp 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-es-7-2023.05.27 0 p STARTED ELASTICSEARCH-NODE-1
.reporting-2022-06-26 0 p STARTED ELASTICSEARCH-NODE-1
.reporting-2022-08-14 0 p STARTED ELASTICSEARCH-NODE-1
.kibana-event-log-7.17.2-000012 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-kibana-7-2023.05.23 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-es-7-2023.05.26 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_ingest_8.8.0_reindex_temp 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-kibana-7-2023.05.27 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-12 0 p STARTED ELASTICSEARCH-NODE-1
.security-7 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-25 0 p STARTED ELASTICSEARCH-NODE-1
.reporting-2023-03-05 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_ingest_8.8.0_001 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_analytics_8.8.0_reindex_temp 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-19 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_security_solution_8.8.0_001 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-04-24 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-24 0 p STARTED ELASTICSEARCH-NODE-1
.kibana-event-log-8.6.2-000001 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-16 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-es-7-2023.05.22 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_task_manager_8.6.2_001 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-es-7-2023.05.24 0 p STARTED ELASTICSEARCH-NODE-1
.reporting-2023-04-02 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-04-19 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-14 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-kibana-7-2023.05.28 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-10 0 p STARTED ELASTICSEARCH-NODE-1
.kibana-event-log-7.17.2-000014 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-kibana-7-2023.05.22 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-18 0 p STARTED ELASTICSEARCH-NODE-1
.ds-ilm-history-5-2023.02.27-000020 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-02 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-13 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_8.8.0_001 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-kibana-7-2023.05.25 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-kibana-7-2023.05.26 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_task_manager_7.17.2_001 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_8.8.0_reindex_temp 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-es-7-2023.05.28 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-es-7-2023.05.23 0 p STARTED ELASTICSEARCH-NODE-1
.ds-ilm-history-5-2023.03.29-000022 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-21 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-15 0 p STARTED ELASTICSEARCH-NODE-1
.ds-ilm-history-5-2023.04.28-000024 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-es-7-2023.05.25 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-22 0 p STARTED ELASTICSEARCH-NODE-1
.transform-internal-007 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-26 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-23 0 p STARTED ELASTICSEARCH-NODE-1
.monitoring-kibana-7-2023.05.24 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-17 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-09 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_security_solution_8.8.0_reindex_temp 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_analytics_8.8.0_001 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-04-28 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_8.6.2_001 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-11 0 p STARTED ELASTICSEARCH-NODE-1
cf_http-2023-05-20 0 p STARTED ELASTICSEARCH-NODE-1
.kibana_alerting_cases_8.8.0_001 0 p STARTED ELASTICSEARCH-NODE-1
Then run allocation explain on one of the regular indices:
GET _cluster/allocation/explain?pretty
{
"index": "cf_http-2023-05-06",
"shard": 0,
"primary": true
}
Then run the same on the actual index that the .kibana alias is pointing to.
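To find which concrete index the .kibana alias resolves to, something like this works, and the returned index name goes into the same allocation explain request:

GET _cat/aliases/.kibana?v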
Also, when you look in the Elasticsearch logs, do you see errors like:
Caused by: org.elasticsearch.ElasticsearchException: java.io.IOException: failed to read /var/lib/elasticsearch/nodes/0/indices/3mUO113ES22qeEDguuH-VA/0/_state/state-23.st
at org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:159) ~[elasticsearch-7.16.3.jar:7.16.3]
Most likely the actual Elasticsearch data is corrupt from some severe issue, and you will probably need to recover from a snapshot unless there is some unusual allocation setting (doubtful).
We might be able to fix the .kibana index issue... But I suspect the normal indices are going to be a bigger problem.
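If you do have a snapshot repository registered, the recovery would look roughly like this (repository and snapshot names below are placeholders, and the index is one of the red ones from your listing; note that the existing red index usually has to be deleted or closed first so the restore can recreate it):

GET _snapshot
GET _snapshot/my_repo/_all

POST _snapshot/my_repo/my_snapshot/_restore
{
  "indices": "cf_http-2023-05-06",
  "include_global_state": false
}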