How to restore accidentally deleted system indices

Hi! Our system indices were accidentally deleted: .kibana, .kibana_1, .kibana_task_manager, .security, .security-7.
Elasticsearch log excerpt:

[2021-06-02T17:44:32,369][DEBUG][o.e.a.s.TransportSearchAction] [AGILEVLG-SRV-23] [.kibana_task_manager][0], node[nZb0BalCR1SI3AtEeN8AXg], [P], s[STARTED], a[id=clGtk95-T3W3lIdqD4gUfg]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[.kibana, .kibana_1, .kibana_task_manager, .security, .security-7, logstash-pikautotesttc, logstash-pikautotesttc12, logstash-pikautotesttc12and5, logstash-pikautotesttc4, logstash-runstatus, logstash-runstatus12, logstash-runstatus12and5, logstash-runstatus4], indicesOptions=IndicesOptions[ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false, ignore_throttled=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=0, batchedReduceSize=512, preFilterShardSize=128, allowPartialSearchResults=true, localClusterAlias=null, getOrCreateAbsoluteStartMillis=-1, ccsMinimizeRoundtrips=true, source={"size":1,"query":{"query_string":{"query":"","fields":[],"type":"best_fields","default_operator":"or","max_determinized_states":10000,"enable_position_increments":true,"fuzziness":"AUTO","fuzzy_prefix_length":0,"fuzzy_max_expansions":50,"phrase_slop":0,"analyze_wildcard":false,"escape":false,"auto_generate_synonyms_phrase_query":true,"fuzzy_transpositions":true,"boost":1.0}},"sort":[{"@timestamp":{"order":"desc"}}]}}] lastShard [true]
org.elasticsearch.transport.RemoteTransportException: [AGILEVLG-SRV-23][127.0.0.1:9300][indices:data/read/search[phase/query]]
Caused by: org.elasticsearch.index.query.QueryShardException: No mapping found for [@timestamp] in order to sort on
at org.elasticsearch.search.sort.FieldSortBuilder.build(FieldSortBuilder.java:319) ~[elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.search.sort.SortBuilder.buildSort(SortBuilder.java:153) ~[elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:772) ~[elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.search.SearchService.createContext(SearchService.java:608) ~[elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:583) ~[elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:386) ~[elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.search.SearchService.access$100(SearchService.java:124) ~[elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:358) [elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:354) [elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.search.SearchService$4.doRun(SearchService.java:1069) [elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) [elasticsearch-7.1.1.jar:7.1.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.1.1.jar:7.1.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_231]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_231]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_231]

Logstash then started writing log entries every millisecond, so memory on the machine began to fill up, and I deleted the recreated system indices (.kibana, .kibana_1, .kibana_task_manager, .security, .security-7) again.
Is it possible to restore them somehow?
Because of all this, I now get the error:

> "Kibana server is not ready yet"

Did you back them up?

If not, then no. You will need to reconfigure security, restart Kibana, and configure your dashboards again.
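If you do have a snapshot repository registered, a restore along these lines should bring them back. A minimal sketch; the repository and snapshot names (my_backup, snapshot_1) are hypothetical, and the quoting follows the Elastic docs' bash style:

```
# List the snapshots available in the (hypothetical) repository "my_backup"
curl -X GET "localhost:9200/_snapshot/my_backup/_all?pretty"

# Restore only the deleted system indices. Stop Kibana first: the restore
# fails if Kibana has already recreated an open index with the same name.
curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore" -H 'Content-Type: application/json' -d'
{
  "indices": ".kibana*,.security*",
  "include_global_state": false
}
'
```

Without a snapshot, reconfiguring security usually starts with re-running the built-in password setup once Elasticsearch is up again. A sketch, assuming the tool accepts the bootstrap credentials now that the .security index is gone:

```
cd c:\ELK\elasticsearch-7.1.1
bin\elasticsearch-setup-passwords.bat interactive
```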

Thank you for your answer. Could you please walk me through the steps? Do I understand correctly that I will need to adjust Kibana's yml config file?

You don't need to adjust anything in the config. Try restarting Kibana and see whether it recreates the indices it needs.
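On Windows that is just stopping the processes and starting them again. A sketch using the paths visible in the logs in this thread; adjust if Elasticsearch and Kibana run as services:

```
REM Start Elasticsearch first and wait until the cluster is up
c:\ELK\elasticsearch-7.1.1\bin\elasticsearch.bat

REM Then start Kibana; on startup it should recreate .kibana and .kibana_task_manager
C:\ELK\kibana-7.1.1-windows-x86_64\bin\kibana.bat
```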

When I do that, the indices defined in the Logstash configuration file are created, but the system indices are not.

Right. And to get the system indices back you will need to restart both Kibana and Elasticsearch.
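After both restarts you can check whether the system indices came back. A quick check (add -u user:password if security is responding):

```
# List any recreated system indices; ?v adds column headers
curl "http://localhost:9200/_cat/indices/.kibana*,.security*?v"
```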

If I go directly to http://192.168.34.73:5601/, I get the error: "Kibana server is not ready yet".
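While the UI shows that message, Kibana's status API may still respond and give a hint about what it is waiting on:

```
# Kibana status endpoint; often reachable even when the UI reports "not ready"
curl "http://192.168.34.73:5601/api/status"
```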

An employee ran the command `curl -XDELETE https://elastic-search-host/.kibana*`.
I think that's the cause of the problem.
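For the future, Elasticsearch can be told to refuse wildcard deletes outright. One elasticsearch.yml line (the setting defaults to false in 7.x):

```
# Reject DELETE requests that use wildcards or _all, so a stray
# "curl -XDELETE .../.kibana*" fails instead of removing indices
action.destructive_requires_name: true
```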

What do your Elasticsearch and Kibana logs show after the restart?

Do you not have access control set up?

Elasticsearch logs after the restart:

[2021-06-07T09:03:31,386][INFO ][o.e.e.NodeEnvironment ] [AGILEVLG-SRV-23] using [1] data paths, mounts [[(c:)]], net usable_space [23.4gb], net total_space [126.6gb], types [NTFS]
[2021-06-07T09:03:31,402][INFO ][o.e.e.NodeEnvironment ] [AGILEVLG-SRV-23] heap size [989.8mb], compressed ordinary object pointers [true]
[2021-06-07T09:03:31,620][INFO ][o.e.n.Node ] [AGILEVLG-SRV-23] node name [AGILEVLG-SRV-23], node ID [nZb0BalCR1SI3AtEeN8AXg], cluster name [elasticsearch]
[2021-06-07T09:03:31,620][INFO ][o.e.n.Node ] [AGILEVLG-SRV-23] version[7.1.1], pid[6304], build[default/zip/7a013de/2019-05-23T14:04:00.380842Z], OS[Windows Server 2012 R2/6.3/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_231/25.231-b11]
[2021-06-07T09:03:31,620][INFO ][o.e.n.Node ] [AGILEVLG-SRV-23] JVM home [C:\Program Files\Java\jre1.8.0_231]
[2021-06-07T09:03:31,620][INFO ][o.e.n.Node ] [AGILEVLG-SRV-23] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=C:\Users\BEZZBT~1\AppData\Local\Temp\elasticsearch, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Dio.netty.allocator.type=unpooled, -Delasticsearch, -Des.path.home=c:\ELK\elasticsearch-7.1.1, -Des.path.conf=c:\ELK\elasticsearch-7.1.1\config, -Des.distribution.flavor=default, -Des.distribution.type=zip, -Des.bundled_jdk=true, exit, abort, -Xms1024m, -Xmx1024m, -Xss1024k]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [aggs-matrix-stats]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [analysis-common]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [ingest-common]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [ingest-geoip]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [ingest-user-agent]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [lang-expression]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [lang-mustache]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [lang-painless]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [mapper-extras]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [parent-join]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [percolator]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [rank-eval]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [reindex]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [repository-url]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [transport-netty4]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-ccr]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-core]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-deprecation]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-graph]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-ilm]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-logstash]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-ml]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-monitoring]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-rollup]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-security]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-sql]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] loaded module [x-pack-watcher]
[2021-06-07T09:03:36,094][INFO ][o.e.p.PluginsService ] [AGILEVLG-SRV-23] no plugins loaded

Response from localhost:
{"error":{"root_cause":[{"type":"invalid_index_name_exception","reason":"Invalid index name [_indicies], must not start with '_'.","index_uuid":"_na_","index":"_indicies"}],"type":"invalid_index_name_exception","reason":"Invalid index name [_indicies], must not start with '_'.","index_uuid":"_na_","index":"_indicies"},"status":400}

Logstash logs:

[2021-06-07T09:03:23,347][WARN ][logstash.runner ] SIGINT received. Shutting down.
[2021-06-07T09:03:24,626][INFO ][filewatch.observingtail ] QUIT - closing all files and shutting down.
[2021-06-07T09:03:24,688][INFO ][filewatch.observingtail ] QUIT - closing all files and shutting down.
[2021-06-07T09:03:24,704][INFO ][filewatch.observingtail ] QUIT - closing all files and shutting down.
[2021-06-07T09:03:24,724][INFO ][filewatch.observingtail ] QUIT - closing all files and shutting down.
[2021-06-07T09:03:24,735][INFO ][filewatch.observingtail ] QUIT - closing all files and shutting down.
[2021-06-07T09:03:24,735][FATAL][logstash.runner ] SIGINT received. Terminating immediately..
[2021-06-07T09:04:16,455][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2021-06-07T09:04:16,471][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.1"}
[2021-06-07T09:04:37,120][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_internal:xxxxxx@localhost:9200/]}}
[2021-06-07T09:04:37,932][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://logstash_internal:xxxxxx@localhost:9200/"}
[2021-06-07T09:04:38,073][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2021-06-07T09:04:38,089][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[2021-06-07T09:04:38,167][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2021-06-07T09:04:38,182][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2021-06-07T09:04:38,260][INFO ][logstash.filters.elasticsearch] New ElasticSearch filter client {:hosts=>["localhost:9200"]}
[2021-06-07T09:04:38,557][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2021-06-07T09:04:39,495][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x4760483d run>"}
[2021-06-07T09:04:41,214][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2021-06-07T09:04:41,292][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2021-06-07T09:04:41,292][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2021-06-07T09:04:41,292][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2021-06-07T09:04:41,292][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2021-06-07T09:04:41,448][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2021-06-07T09:04:41,464][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2021-06-07T09:04:41,464][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2021-06-07T09:04:41,464][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2021-06-07T09:04:42,260][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2021-06-07T09:04:46,387][INFO ][logstash.filters.elasticsearch] New ElasticSearch filter client {:hosts=>["localhost:9200"]}
[2021-06-07T09:04:46,433][INFO ][logstash.filters.elasticsearch] New ElasticSearch filter client {:hosts=>["localhost:9200"]}
[2021-06-07T09:04:47,417][INFO ][logstash.filters.elasticsearch] New ElasticSearch filter client {:hosts=>["localhost:9200"]}
[2021-06-07T09:04:47,464][INFO ][logstash.filters.elasticsearch] New ElasticSearch filter client {:hosts=>["localhost:9200"]}
[2021-06-07T09:04:47,698][INFO ][logstash.filters.elasticsearch] New ElasticSearch filter client {:hosts=>["localhost:9200"]}
[2021-06-07T09:04:48,010][INFO ][logstash.filters.elasticsearch] New ElasticSearch filter client {:hosts=>["localhost:9200"]}
[2021-06-07T09:04:48,028][INFO ][logstash.filters.elasticsearch] New ElasticSearch filter client {:hosts=>["localhost:9200"]}
[2021-06-07T09:04:48,323][INFO ][logstash.filters.elasticsearch] New ElasticSearch filter client {:hosts=>["localhost:9200"]}
[2021-06-07T09:04:48,995][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Sorry, I can't see the Kibana logs there. Can you post those as well?

Please also format your code/logs/config using the </> button, or markdown-style backticks. It makes things easier to read, which helps us help you.

Please upgrade; 7.1 has been EOL for quite some time now.
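And once the stack is healthy again, it is worth registering a snapshot repository so the next accidental delete is recoverable. A minimal filesystem-repository sketch; the path and repository name are hypothetical, and the location must also be whitelisted via path.repo in elasticsearch.yml:

```
# elasticsearch.yml needs:  path.repo: ["C:\\ELK\\backups"]

# Register the repository (quoting follows the Elastic docs' bash style)
curl -X PUT "localhost:9200/_snapshot/my_backup" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": { "location": "C:\\ELK\\backups" }
}
'

# Take a snapshot of all indices, system indices included
curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"
```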

I can't find the Kibana logs. Do I need to enable them via kibana.yml? I have set the following:

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
elasticsearch.logQueries: true
# Enables you to specify a file where Kibana stores log output.
logging.dest: C:\ELK\kibana-7.1.1-windows-x86_64\bin\logs\kibana.log
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
logging.verbose: true
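One thing to check with that config: the directory in logging.dest must exist, or Kibana may be unable to write the file. A quick PowerShell sketch to create it and follow the log during a restart:

```
# Create the log directory if it does not exist
New-Item -ItemType Directory -Force -Path "C:\ELK\kibana-7.1.1-windows-x86_64\bin\logs"

# Follow the Kibana log while restarting
Get-Content -Wait "C:\ELK\kibana-7.1.1-windows-x86_64\bin\logs\kibana.log"
```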

[screenshot of Kibana log output]

Sorry, this is the log from Kibana.

Please don't post pictures of text or code. They are difficult to read, impossible to search and replicate (if it's code), and some people may not even be able to see them.
