Accidental deletion of Kibana index but visualizations are still intact

Hello

We accidentally applied an ILM policy to the system indices and it deleted (?) the Kibana index.
We are still able to see the saved spaces and visualizations, but we cannot create a new index pattern and cannot make any new changes to the existing visualizations (spaces); we see a forbidden error.

Please let us know if there is any way to fix the forbidden error and move forward with creating the index pattern.

We are looking for the available options without losing ELK data or the visualization spaces.

Elasticsearch and Kibana are on the latest 7.4.x version.

We see the messages below on the Elasticsearch node while creating an index pattern:

machine_memory=12187295744, ml.max_open_jobs=20, xpack.installed=true} [SENT_APPLY_COMMIT]
[2019-10-29T16:50:22,545][INFO ][o.e.c.m.MetaDataIndexTemplateService] [cd] adding template [.management-beats] for index patterns [.management-beats]
[2019-10-29T16:53:53,604][INFO ][o.e.c.m.MetaDataIndexTemplateService] [cdl] adding template [.management-beats] for index patterns [.management-beats]
[2019-10-29T17:08:01,007][INFO ][o.e.c.m.MetaDataIndexTemplateService] [cdldv] adding template [.management-beats] for index patterns [.management-beats]
[2019-10-29T17:08:12,253][INFO ][o.e.c.m.MetaDataIndexTemplateService] [cdld] adding template [.management-beats] for index patterns [.management-beats]

We also see the error messages below:

    at java.lang.Thread.run(Thread.java:830) [?:?]

[2019-10-29T18:44:14,381][WARN ][o.e.x.m.e.l.LocalExporter] [cdldvcd] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: NodeClosedException[node closed {c}{KKTB8ts4StuMsKmYPbeG4g}{Ooc74dVlRKKgqdv2G2Z6Xw}{cdldm}{11.16.116.223:9300}{ilm}{ml.machine_memory=12187295744, xpack.installed=true, ml.max_open_jobs=20}]
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:125) ~[x-pack-monitoring-7.4.0.jar:7.4.0]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?]
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]

We are basically looking at how we can fix these errors and move forward, since we can see data coming into Elasticsearch and then into the Kibana dashboards with no issues.

After realizing the ILM policy had been accidentally assigned to all indices, I removed it.
At this stage we don't know how many system indices were deleted.
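For reference, I detached the policy with the ILM remove-policy API, roughly like the request below (the index pattern is only an example; we ran it against every pattern the policy had been attached to):

POST logstash-*/_ilm/remove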

We appreciate your help.

The index deletion messages from the Elasticsearch log are below:

[2019-10-29T13:34:16,234][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [o-sql-test1/wZYogw3KQ_aFIOF5AHkRDA] deleting index
[2019-10-29T13:34:16,826][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [o-alert/opKA8N_HQKuCp87Ck-n_7A] deleting index
[2019-10-29T13:34:18,218][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [o-2019.09.11/q-8Csi9OSJ23Pmy6cMEx7g] deleting index
[2019-10-29T13:34:18,690][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [o-2019.09.12/G-df9H_fRo-R2K18cAHyKA] deleting index
[2019-10-29T13:44:16,093][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [g-logs-2019.09.25/_2dTjvP9REucqBvBIELwvQ] deleting index
[2019-10-29T13:44:16,309][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [g-logs-2019.09.26/uPd1q-nXTZCgceKASZ68vg] deleting index
[2019-10-29T13:44:16,775][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [o-test/xYifV4tQRpWhYivOn3jk_A] deleting index
[2019-10-29T13:44:17,000][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [o-database-metrics/3hEaZSr_RxGdowQO2dotsg] deleting index
[2019-10-29T13:54:16,129][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [g-logs/iGXlFWgAST6zsk00I6rZDw] deleting index
[2019-10-29T13:54:16,356][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [g-logs-2019.09.24/_ckcp8ALQlGwMx1UBE2-aQ] deleting index
[2019-10-29T14:04:22,931][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [g-logs-2019.09.27/rtG2YppjSDyCPju_muJ5qw] deleting index
[2019-10-29T14:34:17,583][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [active-session-ora-test/Ygf6lPAJQX25aSO6ueNOTg] deleting index
[2019-10-29T14:44:16,084][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [.kibana_1/j9iZKJDYQ9mZSRmQLVw9rg] deleting index
[2019-10-29T14:44:16,229][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [filebeat-7.3.1/Mx_dgtdISMSIgxXfjbdHMw] deleting index
[2019-10-29T14:44:16,398][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [testm-buffer-cache-hit-ratio/8NlmVu4vS0-vShtCNU_6hQ] deleting index
[2019-10-29T14:44:16,582][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [logstash-2019.09.09-000001/VgYLxRRtT9aap5dM3RjWTg] deleting index
[2019-10-29T14:44:16,758][INFO ][o.e.c.m.MetaDataDeleteIndexService] [server1] [filebeat-7.3.2/oaoLcT9DRzaOq_CnPgSvOg] deleting index

While trying to perform further admin activities, we observed the error below:

[cluster_block_exception] index [.security-7] blocked by: [FORBIDDEN/8/index write (api)];

So this says that all admin activities are trying to write to this index, which is not allowing writes.
The settings of this index are shown below. I have tried the refresh and recovery options from the Kibana Dev Tools API area; a sketch of clearing the write block follows the settings.

{
  ".security-7" : {
    "settings" : {
      "index" : {
        "lifecycle" : {
          "name" : ""
        },
        "number_of_shards" : "1",
        "auto_expand_replicas" : "0-1",
        "blocks" : {
          "read_only_allow_delete" : "false",
          "read_only" : "false",
          "write" : "true"
        },
        "provided_name" : ".security-7",
        "format" : "6",
        "creation_date" : "1570068759097",
        "analysis" : {
          "filter" : {
            "email" : {
              "type" : "pattern_capture",
              "preserve_original" : "true",
              "patterns" : [
                "([^@]+)",
                "(\\p{L}+)",
                "(\\d+)",
                "@(.+)"
              ]
            }
          },
          "analyzer" : {
            "email" : {
              "filter" : [
                "email",
                "lowercase",
                "unique"
              ],
              "tokenizer" : "uax_url_email"
            }
          }
        },
        "priority" : "1",
        "number_of_replicas" : "1",
        "uuid" : "6M6B9Cu-QPWLAIeaNzmS5g",
        "version" : {
          "created" : "7030299"
        }
      }
    }
  }
}
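If the only problem is the write block shown above, it can normally be cleared through the index settings API. This is a sketch of what I tried from Dev Tools (assuming nothing else, such as disk watermarks, keeps re-applying the block):

PUT .security-7/_settings
{
  "index.blocks.write": false
}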

@rashmi, could you please share your thoughts on this issue?

Can you check the index settings for the .kibana_x indices in Index Management? If no visualizations or dashboards disappeared, it might be that you just made them read-only.

Thanks for your response @Marius_Dragomir. I have checked: the Kibana index was deleted accidentally, so there is no place to check whether it was in read-only mode, but I tried to put all indices into read-write mode.
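What I ran to clear the blocks was roughly the following (a sketch from memory; the exact request may have differed slightly):

PUT _all/_settings
{
  "index.blocks.read_only": false,
  "index.blocks.read_only_allow_delete": null
}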

What I did was export all visualizations to a test environment, rebuild the cluster, and import them back. We lost the data, but we are still able to use the visualizations. Since this is a dev environment, we can take the data loss.
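For anyone else recovering this way: the saved objects can also be moved with the Kibana saved objects export/import API rather than through the UI. A rough sketch follows (the Kibana hostnames are placeholders, and authentication would need to be added on a secured cluster):

curl -X POST "http://old-kibana:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"type": ["index-pattern", "visualization", "dashboard"]}' \
  > export.ndjson

curl -X POST "http://new-kibana:5601/api/saved_objects/_import" \
  -H "kbn-xsrf: true" \
  --form file=@export.ndjson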

That sounds like the proper way of recovering; I don't think I can offer any more help with that. I will also add an enhancement request so that we add an extra warning in ILM when it would impact system indices. It should help avoid cases like this.

Thanks @Marius_Dragomir, please do request that enhancement. ILM should not be applied to system indices.

One item I was interested in finding out was how Kibana was able to display visualizations when the index was removed. It was not just in memory, since I have restarted Kibana multiple times. It was storing the visualization/dashboard information somewhere; I was trying to find out whether that was a second Kibana index and whether I could use it to create an alias for the Kibana system index, roughly as sketched below.
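What I had in mind, roughly, was checking whether .kibana is an alias and, if only .kibana_2 survived, pointing the alias at it (a sketch only; I never got to run it before the rebuild):

GET _cat/aliases/.kibana*?v

POST _aliases
{
  "actions": [
    { "add": { "index": ".kibana_2", "alias": ".kibana" } }
  ]
}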

If you send a "GET _cat/indices" request in Dev Tools, it will show all the indices. It should also show whether there is a .kibana_1 or .kibana_2 index.
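For example, a narrower form of the same request that only lists the Kibana indices:

GET _cat/indices/.kibana*?v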

Right at that time (I mean before the restore), there was only a .kibana_2 index, which was in read-write mode but didn't allow saving changes to visualizations.

So this was the setting on the .kibana_2 index? "index.blocks.read_only_allow_delete": null

Yes, that is correct @Marius_Dragomir.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.