Multiple kibana indices in recovered snapshot

Hi team,

I have restored a snapshot that contains multiple Kibana indices. Please see the Kibana-dedicated indices in the snapshot summary, retrieved with GET _snapshot/rep_2/monthly-snapshot-2022.09.01?pretty:

{
  "feature_name" : "kibana",
  "indices" : [
    ".kibana_7.16.3_001",
    ".kibana_task_manager_2",
    ".apm-custom-link",
    ".kibana_task_manager_1",
    ".apm-agent-configuration",
    ".kibana_5",
    ".kibana_task_manager_7.16.3_001",
    ".kibana_2",
    ".kibana_1",
    ".kibana_4",
    ".kibana_3"
  ]
},

Could you help me understand how to integrate those indices into the Kibana dashboards?
Even though the indices are now available/open in Elasticsearch, nothing has changed in Kibana.
My kibana.yml is the following:

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#data.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#data.autocomplete.valueSuggestions.terminateAfter: 100000


--------------------------

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
#server.host: ["http://10.116.38.205"]
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries
elasticsearch.hosts: ["http://10.116.38.205:9200"]
#server.defaultRoute: "/app/dashboard"
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
server.publicBaseUrl: "http://10.116.38.205:5601"
# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# You may use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

Any help would be appreciated.
Many thanks!

Hi @not_correct .

It could be that your .kibana index alias is not pointing at the restored index.

Could you run GET .kibana/_alias from dev tools or another API client to see what index Kibana is using?
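If Dev Tools isn't handy, the same check from a shell would look roughly like this (host, port, and credentials are placeholders; adjust to your cluster):

```
curl -s -u elastic:changeme "http://localhost:9200/.kibana/_alias?pretty"
```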

Hi,
Thank you so much for your reply.

I get:

{
  ".kibana_7.16.3_001" : {
    "aliases" : {
      ".kibana" : { },
      ".kibana_7.16.3" : { }
    }
  }
}

So I added the .kibana alias to .kibana_1 and the dashboards appeared, but unfortunately in the default space. However, all my indices are located in a custom space. Is there a way to point the dashboards to a specific space?
I appreciate your help
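For reference, the alias change I made was along these lines, via Dev Tools (a sketch; the index name is the one from my snapshot and may differ in your cluster):

```
POST _aliases
{
  "actions": [
    { "add": { "index": ".kibana_1", "alias": ".kibana" } }
  ]
}
```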

all my indices are located in the custom space

To be clear—indices are an Elasticsearch concept and they are accessible in Kibana regardless of which space you're in. I think what you are probably talking about are Kibana data views which define how Kibana accesses Elasticsearch data (doc). Data views can be specific to a Kibana space.

If you want to move your dashboards (and/or the data views they are using) to another Kibana space, you can export them from the default space under Stack Management -> Saved Objects.

That downloads a file of your Kibana saved objects. Switch spaces, then import that file into the new space using the same interface.
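If you prefer doing this over HTTP, Kibana also exposes saved-object export/import endpoints. Roughly (the Kibana host, space id, and credentials here are placeholders):

```
# Export dashboards (and the objects they reference) from the default space
curl -X POST "http://localhost:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"type": "dashboard", "includeReferencesDeep": true}' \
  -o export.ndjson

# Import into a specific space (replace my-space with your space id)
curl -X POST "http://localhost:5601/s/my-space/api/saved_objects/_import" \
  -H "kbn-xsrf: true" --form file=@export.ndjson
```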

Does that help?

Hello,
Thank you so much for your answer.
You have helped a lot!
Is there somewhere I can read about the anatomy of saved objects?
Somehow, after restoring the snapshot, the dashboard data is not visible. The message says the index does not exist. There is an issue with the restored indices, but I will open a separate discussion for that. One of the solutions I am considering is to remove all the old data while keeping the Kibana queries. The system is in production and focuses on real-time visualization, so to do that I'd like to be able to review the dashboard settings available in saved objects.

The message says the index does not exist.

Can you post a screenshot of this? ^^

Is there somewhere I can read about the anatomy of saved objects?

Saved objects are technically just documents in Elasticsearch (you can read more here). You can inspect their contents from the saved object management page.
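If you want to look at the raw documents, a Dev Tools query along these lines shows dashboard saved objects directly (a sketch; the index name should match whatever .kibana points at in your cluster):

```
GET .kibana/_search
{
  "query": { "term": { "type": "dashboard" } },
  "size": 10
}
```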

Hi,
Thank you so much for your prompt help.
The issue with the unavailable indices was related to the volume not being attached to the EC2 instance, so it is now resolved.

Many thanks for your help once again, it is really appreciated.

