We couldn't log you in. Please try again

After taking the snapshot, I deleted all indices to test Snapshot and Restore. At first I thought it was working, because Kibana came up, until I tried to log in. I can't log in because the shard for the ".kibana_security_session_1" index is missing.

I found these errors in the logs:

{"type":"error","@timestamp":"2022-09-13T21:55:11+10:00","tags":["connection","client","error"],"pid":33874,"level":"error","error":{"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","name":"Error","stack":"Error: 140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","code":"ERR_SSL_SSLV3_ALERT_CERTIFICATE_UNKNOWN"},"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n"}
{"type":"error","@timestamp":"2022-09-13T21:55:11+10:00","tags":["connection","client","error"],"pid":33874,"level":"error","error":{"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","name":"Error","stack":"Error: 140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","code":"ERR_SSL_SSLV3_ALERT_CERTIFICATE_UNKNOWN"},"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n"}
{"type":"error","@timestamp":"2022-09-13T21:55:12+10:00","tags":["connection","client","error"],"pid":33874,"level":"error","error":{"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","name":"Error","stack":"Error: 140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","code":"ERR_SSL_SSLV3_ALERT_CERTIFICATE_UNKNOWN"},"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n"}
{"type":"error","@timestamp":"2022-09-13T21:55:12+10:00","tags":["connection","client","error"],"pid":33874,"level":"error","error":{"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","name":"Error","stack":"Error: 140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","code":"ERR_SSL_SSLV3_ALERT_CERTIFICATE_UNKNOWN"},"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n"}
{"type":"error","@timestamp":"2022-09-13T21:55:12+10:00","tags":["connection","client","error"],"pid":33874,"level":"error","error":{"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","name":"Error","stack":"Error: 140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","code":"ERR_SSL_SSLV3_ALERT_CERTIFICATE_UNKNOWN"},"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n"}
{"type":"error","@timestamp":"2022-09-13T21:55:12+10:00","tags":["connection","client","error"],"pid":33874,"level":"error","error":{"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","name":"Error","stack":"Error: 140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","code":"ERR_SSL_SSLV3_ALERT_CERTIFICATE_UNKNOWN"},"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n"}
{"type":"error","@timestamp":"2022-09-13T21:55:13+10:00","tags":["connection","client","error"],"pid":33874,"level":"error","error":{"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","name":"Error","stack":"Error: 140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","code":"ERR_SSL_SSLV3_ALERT_CERTIFICATE_UNKNOWN"},"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n"}
{"type":"error","@timestamp":"2022-09-13T21:55:13+10:00","tags":["connection","client","error"],"pid":33874,"level":"error","error":{"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","name":"Error","stack":"Error: 140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","code":"ERR_SSL_SSLV3_ALERT_CERTIFICATE_UNKNOWN"},"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n"}
{"type":"error","@timestamp":"2022-09-13T21:55:13+10:00","tags":["connection","client","error"],"pid":33874,"level":"error","error":{"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","name":"Error","stack":"Error: 140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n","code":"ERR_SSL_SSLV3_ALERT_CERTIFICATE_UNKNOWN"},"message":"140701074941952:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1546:SSL alert number 46\n"}
{"type":"log","@timestamp":"2022-09-13T21:55:14+10:00","tags":["error","plugins","security","session","index"],"pid":33874,"message":"Failed to clean up sessions: search_phase_execution_exception: "}
{"type":"log","@timestamp":"2022-09-13T21:55:14+10:00","tags":["error","plugins","taskManager"],"pid":33874,"message":"Task session_cleanup \"session_cleanup\" failed: {\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[],\"caused_by\":{\"type\":\"search_phase_execution_exception\",\"reason\":\"Search rejected due to missing shards [[.kibana_security_session_1][0]]. Consider using `allow_partial_search_results` setting to bypass this error.\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]}},\"status\":503}"}

Can someone help me get the missing shards back? I'd highly appreciate your help. Thanks all! :slight_smile:

@wonderland14 - if you deleted all indices and are trying to restore the entire cluster from a snapshot, you may want to check this documentation: Restore an entire cluster (I am assuming you are using version 7.x).
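
For reference, a full-cluster restore from a snapshot typically looks something like this (my_repository and my_snapshot below are placeholders; adjust to your repository and snapshot names). include_global_state also restores cluster-wide settings and templates along with the indices:

POST _snapshot/my_repository/my_snapshot/_restore
{
  "indices": "*",
  "include_global_state": true
}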

Yes, this is exactly what I followed. But it seems the index .kibana_security_session_1 was not included in the snapshot I restored.

But this time, I saw this after running a GET command. Does it mean that the index .kibana_security_session_1 was not deleted?

{".kibana_security_session_1":{"aliases":{},"mappings":{"dynamic":"strict","properties":{"accessAgreementAcknowledged":{"type":"boolean"},"content":{"type":"binary"},"idleTimeoutExpiration":{"type":"date"},"lifespanExpiration":{"type":"date"},"provider":{"properties":{"name":{"type":"keyword"},"type":{"type":"keyword"}}},"usernameHash":{"type":"keyword"}}},"settings":{"index":{"routing":{"allocation":{"include":{"_tier_preference":"data_content"}}},"refresh_interval":"1s","hidden":"true","number_of_shards":"1","auto_expand_replicas":"0-1","provided_name":".kibana_security_session_1","creation_date":"1622445326126","priority":"1000","number_of_replicas":"1","uuid":"WF4HERRDRzqt5JS7V1EnhQ","version":{"created":"7110199"}}}}}

If you are able to get the details of the .kibana_security_session_1 index, then it exists. However, you need to make sure that the underlying shard is allocated.

Can you run GET _cluster/health and check that the cluster is yellow or green? If it is red, then it means that some primary shards are not allocated and we will need to fix that.
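
Since you cannot log in to Kibana, you can run the same check with curl; something like this (credentials, host, and port are placeholders):

curl -u elastic:password -XGET "localhost:9200/_cluster/health?pretty"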

Yes, I have these 70 unassigned shards. Is it possible to get the missing shards or indices back?

"status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 4,
  "number_of_data_nodes" : 4,
  "active_primary_shards" : 79,
  "active_shards" : 158,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 70,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 69.2982456140351

I tried to curl GET .kibana_security_session_1/_recovery?human but it only returns {} (empty brackets), which means it didn't recover.

Am I still able to get this missing index back? :frowning:

Can you run GET _cat/indices?v&expand_wildcards=all&health=red? Let's check how many indices are in a red status. I hope you have a snapshot for these...

green  open   .logs-deprecation.elasticsearch-default   GnZnQyelROiudu6eIMtHyg   1   1      11872            0      6.1mb            3mb
red    open   dsgp2                                     QdM8KG4XS-yWFBS53axTYA   1   1
green  open   .geoip_databases                          q-ZD4hjmSm27QrjggBmCzg   1   1         41            3     77.4mb         38.7mb
red    open   account_v1                                l_1aJoimRhmb_Y8sTlB-7Q   5   1       9318          613      6.2mb          3.1mb
green  open   .apm-custom-link                          7XAmcfG6RRiBbZxCkS9jzA   1   1          0            0       560b           280b
green  open   .kibana_task_manager_1                    7ILfE-AuTJGcH7O8YrRa4g   1   1          9         1638    646.1kb          323kb
green  open   dmys2                                     AUrjQ-57S_ihxg9G0eYtHQ   1   1          0            0       454b           227b
green  open   dhkg2                                     7sQG6AglTbSF32e-77Wdtw   1   1         25            2    379.6kb        189.8kb    
green  open   .kibana_task_manager_7.16.2_001           _OZSWzRQRbaxMhVSYLz38g   1   1         18          905      346kb        160.3kb
red    open   .transform-internal-007                   CtxERUkRTcyhx2WtSOXKLA   1   1
green  open   .kibana_7.16.2_001                        He3DW6DbQuyqk25EKaFz9A   1   1       2806           32      6.2mb          3.1mb
green  open   .transform-internal-005                   ClYKBEKPSu2S7Nm-85tg4A   1   1          3            0     47.6kb         23.8kb
green  open   .apm-agent-configuration                  dt26-GmdRwSG_pcwwno1XQ   1   1          0            0       454b           227b
green  open   exception_v1                              Sv7H0H_9R2yW8epBCv8jOQ   5   1         69            0    248.3kb        124.1kb
red    open   task_v1                                   NAYm1-zJQGCHRRNUDY4sXg   5   1         19            0     37.2kb         18.6kb
red    open   .kibana_1                                 rLwchx_JRE-908c8_fLlPA   1   1
red    open   .tasks                                    5_12AhT5SSODJrrMbCOciA   1   1
green  open   metrics-endpoint.metadata_current_default eBzehIx0Ssa3-AYBWtdu1w   1   1          0            0       560b           280b
green  open   .security-7                               XbR_BAGSQliSf4QjKw2WZQ   1   1         88            0      390kb          195kb
green  open   .kibana-event-log-7.11.0-000017           RMjRwSgMRz-t5ioJgdgEfg   1   1          0            0       452b           226b
red    open   .kibana-event-log-7.11.0-000016           krYpk1DES1y-6EtWGenLJA   1   1
red    open   .kibana-event-log-7.11.0-000015           68w_h_FOSAeiaQ4rrmRxPg   1   1
green  open   .kibana-event-log-7.11.0-000014           rNVbb172RP-x3ALQiwMYqg   1   1          0            0       452b           226b
red    open   dvnm2                                     sRcQyZUmT0KLDXM63n4DZQ   1   1
green  open   dhgi2                                     Rd-Q2oVeQXOD4k7DQouYoQ   1   1          2            0     58.3kb         29.1kb

Here it is :slight_smile:

Interesting - the API (GET _cat/indices?v&expand_wildcards=all&health=red) should have listed only the indices in red health. Did you run the same command?

Yeah, I just copied this command. I'm confused too about why the green ones are included :sweat_smile:

Hmmm, that should not happen. Anyway, can you run GET /_cat/shards/.kibana*?v? Let's check the Kibana shards and their allocation status.
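
One guess about the extra green rows: if you ran the _cat/indices call through curl without quoting the URL, the shell treats & as a command separator, so the expand_wildcards and health=red parameters never reach Elasticsearch. Quoting the whole URL avoids that, for example (credentials, host, and port are placeholders):

curl -u elastic:password -XGET "localhost:9200/_cat/indices?v&expand_wildcards=all&health=red"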

.kibana-event-log-7.16.2-000005 0     p      UNASSIGNED
.kibana-event-log-7.16.2-000005 0     r      UNASSIGNED
.kibana-event-log-7.16.2-000004 0     r      STARTED       8    36kb 10.138.136.163 localhost
.kibana-event-log-7.16.2-000004 0     p      STARTED       8    36kb 10.138.136.162 localhost
.kibana_7.16.2_001              0     p      STARTED    2806   3.1mb 10.138.136.161 localhost
.kibana_7.16.2_001              0     r      STARTED    2806   3.1mb 10.138.136.163 localhost
.kibana-event-log-7.11.0-000014 0     p      STARTED       0    226b 10.138.136.161 localhost
.kibana-event-log-7.11.0-000014 0     r      STARTED       0    226b 10.138.136.162 localhost
.kibana-event-log-7.16.2-000006 0     p      STARTED      13  26.3kb 10.138.136.163 localhost
.kibana-event-log-7.16.2-000006 0     r      STARTED      13  26.3kb 10.138.136.160 localhost
.kibana_task_manager_1          0     p      STARTED       9   323kb 10.138.136.162 localhost
.kibana_task_manager_1          0     r      STARTED       9   323kb 10.138.136.160 localhost
.kibana_security_session_1      0     p      UNASSIGNED
.kibana_security_session_1      0     r      UNASSIGNED
.kibana-event-log-7.16.2-000007 0     p      STARTED       1   6.1kb 10.138.136.161 localhost
.kibana-event-log-7.16.2-000007 0     r      STARTED       1   6.1kb 10.138.136.162 localhost
.kibana-event-log-7.11.0-000017 0     r      STARTED       0    226b 10.138.136.163 localhost
.kibana-event-log-7.11.0-000017 0     p      STARTED       0    226b 10.138.136.162 localhost
.kibana-event-log-7.11.0-000016 0     r      UNASSIGNED
.kibana-event-log-7.11.0-000016 0     p      UNASSIGNED
.kibana_task_manager_7.16.2_001 0     p      STARTED      18 160.3kb 10.138.136.161 localhost
.kibana_task_manager_7.16.2_001 0     r      STARTED      18 185.7kb 10.138.136.163 localhost
.kibana-event-log-7.11.0-000015 0     r      UNASSIGNED
.kibana-event-log-7.11.0-000015 0     p      UNASSIGNED
.kibana_1                       0     p      UNASSIGNED
.kibana_1                       0     r      UNASSIGNED

Yay, found it in the unassigned shards (I'm relieved now) :slight_smile:

OK, got it - let's check the cluster allocation explain output for the .kibana_security_session_1 shard:

GET _cluster/allocation/explain
{
  "index": ".kibana_security_session_1",
  "primary": true,
  "shard": 0
}

Do you happen to know which snapshot you used during restore as well?
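
If you want to double-check what the repository contains, listing its snapshots should help; something like this (my_repository is a placeholder for your repository name):

GET _cat/snapshots/my_repository?v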

Hmm, may I know how to convert it to curl? Sorry for my ignorance.

Yeah, I do still have a copy of my snapshot (filename).

curl -u elastic:password -XGET "elastic_ip:port_number/_cluster/allocation/explain?pretty" -H 'Content-Type: application/json' -d'
{
  "index": ".kibana_security_session_1",
  "primary": true,
  "shard": 0
}
'

(I am assuming your cluster is secured, so you need to access it with the elastic superuser. Replace the password, IP, and port accordingly.)

Hmmm, I got this error:

{"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}curl: (6) Could not resolve host: Content-Type; Unknown error

This is the command I used:

curl -u superuser:password -XGET "localhost:port/_cluster/allocation/explain?pretty"-H 'Content-Type: application/json' -d' { "index": ".kibana_security_session_1", "primary": true, "shard": 0}'

I had to change the localhost and port here in the topic.

Now I got this:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "unable to authenticate user [superuser] for REST request [/_cluster/allocation/explain?pretty]",
        "header" : {
          "WWW-Authenticate" : [
            "Basic realm=\"security\" charset=\"UTF-8\"",
            "Bearer realm=\"security\"",
            "ApiKey"
          ]
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "unable to authenticate user [superuser] for REST request [/_cluster/allocation/explain?pretty]",
    "header" : {
      "WWW-Authenticate" : [
        "Basic realm=\"security\" charset=\"UTF-8\"",
        "Bearer realm=\"security\"",
        "ApiKey"
      ]
    }
  },
  "status" : 401
}

"Could not resolve host" usually points to a problem resolving the hostname you specified. But looking closely at the command you pasted, there is no space between the closing quote of the URL and -H, so the shell glues -H onto the URL and curl then treats 'Content-Type: application/json' as a second URL to fetch - that explains both the 406 (the header was never sent) and the "Could not resolve host: Content-Type" error. Add the space back, and use the same IP or hostname you used for the previous GET _cat/shards calls.

Here - I forgot to change the user and password; that's why I got the error above :sweat_smile: With the correct credentials, I got this:

{
  "index" : ".kibana_security_session_1",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "ALLOCATION_FAILED",
    "at" : "2022-09-13T11:44:40.345Z",
    "failed_allocation_attempts" : 5,
    "details" : "failed shard on node [QZZkn1ASTsuKhhHvhOD03g]: failed recovery, failure RecoveryFailedException[[.kibana_security_session_1][0]: Recovery failed on {localhost}{QZZkn1ASTsuKhhHvhOD03g}{OynA96iQSy-Fl-dNk1R6pA}{10.138.136.161}{10.138.136.161:9303}{cdfhilmrstw}{ml.machine_memory=12409638912, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=2147483648}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[failed to restore snapshot [daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg]]; nested: SnapshotMissingException[[Backup_Repository:daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg] is missing]; nested: NoSuchFileException[/opt/elasticsearch/backupelastic/sys/indices/yOtUwKEdRtmClYRCFSO2ow/0/snap-kvpVPTpxQryWjm8Z_fweGg.dat]; ",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "QZZkn1ASTsuKhhHvhOD03g",
      "node_name" : "localhost",
      "transport_address" : "10.138.136.161:9303",
      "node_attributes" : {
        "ml.machine_memory" : "12409638912",
        "ml.max_open_jobs" : "512",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "2147483648",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [Backup_Repository:daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg] because of [failed shard on node [QZZkn1ASTsuKhhHvhOD03g]: failed recovery, failure RecoveryFailedException[[.kibana_security_session_1][0]: Recovery failed on {localhost}{QZZkn1ASTsuKhhHvhOD03g}{OynA96iQSy-Fl-dNk1R6pA}{10.138.136.161}{10.138.136.161:9303}{cdfhilmrstw}{ml.machine_memory=12409638912, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=2147483648}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[failed to restore snapshot [daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg]]; nested: SnapshotMissingException[[Backup_Repository:daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg] is missing]; nested: NoSuchFileException[/opt/elasticsearch/backupelastic/sys/indices/yOtUwKEdRtmClYRCFSO2ow/0/snap-kvpVPTpxQryWjm8Z_fweGg.dat]; ] - manually close or delete the index [.kibana_security_session_1] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        }
      ]
    },
    {
      "node_id" : "RjGDldfgTT-8R5j7mNtAQw",
      "node_name" : "localhost",
      "transport_address" : "10.138.136.163:9303",
      "node_attributes" : {
        "ml.machine_memory" : "12409647104",
        "ml.max_open_jobs" : "512",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "2147483648",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 2,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [Backup_Repository:daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg] because of [failed shard on node [QZZkn1ASTsuKhhHvhOD03g]: failed recovery, failure RecoveryFailedException[[.kibana_security_session_1][0]: Recovery failed on {localhost}{QZZkn1ASTsuKhhHvhOD03g}{OynA96iQSy-Fl-dNk1R6pA}{10.138.136.161}{10.138.136.161:9303}{cdfhilmrstw}{ml.machine_memory=12409638912, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=2147483648}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[failed to restore snapshot [daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg]]; nested: SnapshotMissingException[[Backup_Repository:daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg] is missing]; nested: NoSuchFileException[/opt/elasticsearch/backupelastic/sys/indices/yOtUwKEdRtmClYRCFSO2ow/0/snap-kvpVPTpxQryWjm8Z_fweGg.dat]; ] - manually close or delete the index [.kibana_security_session_1] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        }
      ]
    },
    {
      "node_id" : "BsmpENn1R1aWOC23pyWc8Q",
      "node_name" : "localhost",
      "transport_address" : "10.138.136.162:9303",
      "node_attributes" : {
        "ml.machine_memory" : "12409647104",
        "ml.max_open_jobs" : "512",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "2147483648",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 3,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [Backup_Repository:daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg] because of [failed shard on node [QZZkn1ASTsuKhhHvhOD03g]: failed recovery, failure RecoveryFailedException[[.kibana_security_session_1][0]: Recovery failed on {localhost}{QZZkn1ASTsuKhhHvhOD03g}{OynA96iQSy-Fl-dNk1R6pA}{10.138.136.161}{10.138.136.161:9303}{cdfhilmrstw}{ml.machine_memory=12409638912, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=2147483648}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[failed to restore snapshot [daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg]]; nested: SnapshotMissingException[[Backup_Repository:daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg] is missing]; nested: NoSuchFileException[/opt/elasticsearch/backupelastic/sys/indices/yOtUwKEdRtmClYRCFSO2ow/0/snap-kvpVPTpxQryWjm8Z_fweGg.dat]; ] - manually close or delete the index [.kibana_security_session_1] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        }
      ]
    },
    {
      "node_id" : "lxb0wEi-Sr665MEcO_7IGA",
      "node_name" : "localhost",
      "transport_address" : "10.138.136.160:9303",
      "node_attributes" : {
        "ml.machine_memory" : "12409647104",
        "ml.max_open_jobs" : "512",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "2147483648",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 4,
      "deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [Backup_Repository:daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg] because of [failed shard on node [QZZkn1ASTsuKhhHvhOD03g]: failed recovery, failure RecoveryFailedException[[.kibana_security_session_1][0]: Recovery failed on {localhost}{QZZkn1ASTsuKhhHvhOD03g}{OynA96iQSy-Fl-dNk1R6pA}{10.138.136.161}{10.138.136.161:9303}{cdfhilmrstw}{ml.machine_memory=12409638912, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=2147483648}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[failed to restore snapshot [daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg]]; nested: SnapshotMissingException[[Backup_Repository:daily_backup--2022-09-13_11-00-00-evmwxc0usa-msh9csmgy2a/kvpVPTpxQryWjm8Z_fweGg] is missing]; nested: NoSuchFileException[/opt/elasticsearch/backupelastic/sys/indices/yOtUwKEdRtmClYRCFSO2ow/0/snap-kvpVPTpxQryWjm8Z_fweGg.dat]; ] - manually close or delete the index [.kibana_security_session_1] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        }
      ]
    }
  ]
}
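
For reference, the decider message above names the two ways forward: restore the index from a snapshot that actually contains its data, or force the allocation of an empty primary. Since .kibana_security_session_1 only stores Kibana login sessions, an empty primary is usually acceptable (users simply get new sessions when they log in again). A sketch, assuming you accept losing the old session data - the snapshot name below is a placeholder, and since all your nodes are named "localhost" you may need to pass a node id (e.g. QZZkn1ASTsuKhhHvhOD03g from the output above) instead of a node name:

# Option 1: delete the failed index, then restore it from a snapshot that contains it
DELETE .kibana_security_session_1

POST _snapshot/Backup_Repository/a_snapshot_that_contains_it/_restore
{
  "indices": ".kibana_security_session_1"
}

# Option 2: force-allocate an empty primary (discards whatever was in the shard)
POST _cluster/reroute
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": ".kibana_security_session_1",
        "shard": 0,
        "node": "QZZkn1ASTsuKhhHvhOD03g",
        "accept_data_loss": true
      }
    }
  ]
}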