Unable to start Kibana after upgrade to 8.17.0

Following an upgrade of Kibana the Kibana service keeps shutting down and restarting and the web interface is unavailable.

The issue appears to be permission-related: an index called ".kibana_task_manager_8.7.1_001" is write-blocked.

Trying to update the index to remove the write block results in "action [indices:admin/settings/update] is unauthorized for user [elastic] with effective roles [superuser] on restricted indices [.kibana_task_manager_8.7.1_001], this action is granted by the index privileges [manage,all]"

The request I ran:

curl -u "elastic:xxxxxxxxx" -k -X PUT "https://172.16.1.90:9200/.kibana_task_manager_8.7.1_001/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index": {
    "blocks": {
      "write": false
    }
  }
}
'

Result:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "action [indices:admin/settings/update] is unauthorized for user [elastic] with effective roles [superuser] on restricted indices [.kibana_task_manager_8.7.1_001], this action is granted by the index privileges [manage,all]"
      }
    ],
    "type" : "security_exception",
    "reason" : "action [indices:admin/settings/update] is unauthorized for user [elastic] with effective roles [superuser] on restricted indices [.kibana_task_manager_8.7.1_001], this action is granted by the index privileges [manage,all]"
  },
  "status" : 403
}
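For reference, the blocks currently applied to the index can be inspected before attempting the update. This is a sketch using the same host and credentials as the request above; `filter_path` just trims the response to the block settings:

```shell
# Inspect which blocks are set on the index
# (a write block shows up as "index.blocks.write": "true").
curl -u "elastic:xxxxxxxxx" -k \
  "https://172.16.1.90:9200/.kibana_task_manager_8.7.1_001/_settings?pretty&filter_path=*.settings.index.blocks"
```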

Can anyone show me how to add the appropriate permissions to the appropriate role, or suggest an alternative way to remove the write block, please?

Cheers

Kibana logs (with debug):

{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2025-01-09T16:28:16.785+00:00","message":"[.kibana_task_manager] TRANSFORMED_DOCUMENTS_BULK_INDEX RESPONSE","log":{"level":"DEBUG","logger":"savedobjects-service"},"process":{"pid":344,"uptime":20.1497312},"trace":{"id":"daecd25631a45a70d4d92883df53e1f0"},"transaction":{"id":"c47c0154e6b9d61d"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2025-01-09T16:28:16.786+00:00","message":"TRANSFORMED_DOCUMENTS_BULK_INDEX received unexpected action response: {\"type\":\"target_index_had_write_block\"}","error":{"message":"TRANSFORMED_DOCUMENTS_BULK_INDEX received unexpected action response: {\"type\":\"target_index_had_write_block\"}","type":"Error","stack_trace":"Error: TRANSFORMED_DOCUMENTS_BULK_INDEX received unexpected action response: {\"type\":\"target_index_had_write_block\"}\n    at throwBadResponse (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-migration-server-internal\\src\\model\\helpers.js:56:9)\n    at model (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-migration-server-internal\\src\\model\\model.js:1337:39)\n    at E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-migration-server-internal\\src\\migrations_state_action_machine.js:52:24\n    at stateActionMachine (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-migration-server-internal\\src\\state_action_machine.js:68:22)\n    at processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at migrationStateActionMachine (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-migration-server-internal\\src\\migrations_state_action_machine.js:48:24)\n    at async Promise.all (index 2)\n    at SavedObjectsService.start (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-server-internal\\src\\saved_objects_service.js:199:7)\n    at Server.start (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-root-server-internal\\src\\server.js:393:31)\n    at Root.start (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-root-server-internal\\src\\root\\index.js:66:14)\n    at bootstrap (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-root-server-internal\\src\\bootstrap.js:119:5)\n    at 
Command.<anonymous> (E:\\apps\\kibana\\kibana-8.17.0\\src\\cli\\serve\\serve.js:234:5)"},"log":{"level":"ERROR","logger":"savedobjects-service"},"process":{"pid":344,"uptime":20.1531979},"trace":{"id":"daecd25631a45a70d4d92883df53e1f0"},"transaction":{"id":"c47c0154e6b9d61d"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2025-01-09T16:28:16.789+00:00","message":"Kibana is shutting down","log":{"level":"INFO","logger":"root"},"process":{"pid":344,"uptime":20.1536065},"trace":{"id":"daecd25631a45a70d4d92883df53e1f0"},"transaction":{"id":"c47c0154e6b9d61d"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2025-01-09T16:28:16.790+00:00","message":"Reason: Unable to complete saved object migrations for the [.kibana_task_manager] index. Error: TRANSFORMED_DOCUMENTS_BULK_INDEX received unexpected action response: {\"type\":\"target_index_had_write_block\"}\n[Error: TRANSFORMED_DOCUMENTS_BULK_INDEX received unexpected action response: {\"type\":\"target_index_had_write_block\"}\n    at throwBadResponse (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-migration-server-internal\\src\\model\\helpers.js:56:9)\n    at model (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-migration-server-internal\\src\\model\\model.js:1337:39)\n    at E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-migration-server-internal\\src\\migrations_state_action_machine.js:52:24\n    at stateActionMachine (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-migration-server-internal\\src\\state_action_machine.js:68:22)\n    at processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at migrationStateActionMachine (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-migration-server-internal\\src\\migrations_state_action_machine.js:48:24)\n    at async Promise.all (index 2)\n    at SavedObjectsService.start (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-saved-objects-server-internal\\src\\saved_objects_service.js:199:7)\n    at Server.start (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-root-server-internal\\src\\server.js:393:31)\n    at Root.start (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-root-server-internal\\src\\root\\index.js:66:14)\n    at bootstrap (E:\\apps\\kibana\\kibana-8.17.0\\node_modules\\@kbn\\core-root-server-internal\\src\\bootstrap.js:119:5)\n    at Command.<anonymous> 
(E:\\apps\\kibana\\kibana-8.17.0\\src\\cli\\serve\\serve.js:234:5)]","log":{"level":"FATAL","logger":"root"},"process":{"pid":344,"uptime":20.1545131},"trace":{"id":"daecd25631a45a70d4d92883df53e1f0"},"transaction":{"id":"c47c0154e6b9d61d"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2025-01-09T16:28:16.791+00:00","message":"stopping server","log":{"level":"DEBUG","logger":"server"},"process":{"pid":344,"uptime":20.1547631},"trace":{"id":"daecd25631a45a70d4d92883df53e1f0"},"transaction":{"id":"c47c0154e6b9d61d"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2025-01-09T16:28:16.792+00:00","message":"stopping http server","log":{"lev

Hi @jheaton ,

Did you follow the Upgrade Kibana documentation recommendations?

From what version are you upgrading?

I upgraded from 8.13.4.

I did not perform a snapshot as I don't have the space.

The cluster was a healthy green.

Hi @Maxim_Palenov

Thanks for your help. From the link you provided I was led to this page:
https://www.elastic.co/guide/en/kibana/current/resolve-migrations-failures.html

It detailed how to add a role granting access to the Kibana indices (step 1), create a temporary superuser with the new role (step 2), and reset the write block on all the indices that had one (step 3). I did not have to delete anything (step 4).
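For anyone hitting the same 403, the steps above can be sketched roughly as follows. The role and user names are illustrative, not from the docs verbatim; the key detail is `allow_restricted_indices: true`, which is what lets the role act on restricted system indices like `.kibana_task_manager_*`:

```shell
# Step 1: create a role that can access the restricted Kibana indices.
# "grant_kibana_system_indices" is an illustrative name.
curl -u "elastic:xxxxxxxxx" -k -X PUT "https://172.16.1.90:9200/_security/role/grant_kibana_system_indices" \
  -H 'Content-Type: application/json' -d'
{
  "indices": [
    {
      "names": [".kibana*"],
      "privileges": ["all"],
      "allow_restricted_indices": true
    }
  ]
}'

# Step 2: create a temporary superuser that also carries the new role.
curl -u "elastic:xxxxxxxxx" -k -X PUT "https://172.16.1.90:9200/_security/user/temp_kibana_admin" \
  -H 'Content-Type: application/json' -d'
{
  "password": "<choose-a-strong-password>",
  "roles": ["superuser", "grant_kibana_system_indices"]
}'

# Step 3: retry the settings update as the temporary user to clear the write block.
curl -u "temp_kibana_admin:<password>" -k -X PUT "https://172.16.1.90:9200/.kibana_task_manager_8.7.1_001/_settings" \
  -H 'Content-Type: application/json' -d'
{
  "index": { "blocks": { "write": false } }
}'
```

Once Kibana starts cleanly, the temporary user and role can be deleted again.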

On the initial restart of the Kibana service, the login page came up but would not allow any user to log in. I stopped the Kibana service to change the logging level from debug back to info, and when I restarted it I was able to log in and access everything as normal. The cluster status was initially red but went green after approximately 5 minutes.

The issue is now resolved.

Cheers!
