Kibana fails to start after 7.9.3 -> 7.10.0 upgrade

I just updated the Elastic Stack from 7.9.3 to 7.10.0 and now Kibana fails to start.
I didn't see anything in the breaking changes that would apply.
The license is Basic and authentication is not used.

[16:24:26.937] [error][data][elasticsearch] [resource_already_exists_exception]: index [.kibana_task_manager_3/u2TDKzdYRtCe9HzES1JZ2A] already exists
[16:24:26.972] [error][data][elasticsearch] [resource_already_exists_exception]: index [.kibana_6/qWq4O_7RT-uy8dP8O285UQ] already exists

Elasticsearch's status is green, but the log contains many warnings like this:

{"type": "server", "timestamp": "2020-11-17T16:17:03,602Z", "level": "WARN", "component": "o.e.x.m.MonitoringService", "cluster.name": "my-admin", "node.name": "my-admin.example.com", "message": "monitoring execution failed", "cluster.uuid": "mKtxgFp7QSmXSIP0NqXDKw", "node.id": "P5NnsStFQgmhq-rDesa3PA" ,
"stacktrace": ["org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks",
"at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$doFlush$0(ExportBulk.java:109) [x-pack-monitoring-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:132) [x-pack-monitoring-7.10.0.jar:7.10.0]",
"at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:108) [x-pack-monitoring-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:89) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:83) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.ActionListener$6.onResponse(ActionListener.java:282) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.ActionListener$4.onResponse(ActionListener.java:163) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:566) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:561) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:98) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:869) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:841) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:900) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:745) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:860) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:335) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:149) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:120) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:112) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:846) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:900) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:745) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onNewClusterState(TransportReplicationAction.java:849) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onNewClusterState(ClusterStateObserver.java:321) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.clusterChanged(ClusterStateObserver.java:196) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateListener(ClusterApplierService.java:526) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateListeners(ClusterApplierService.java:517) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:484) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:418) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.service.ClusterApplierService.access$000(ClusterApplierService.java:68) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:162) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.10.0.jar:7.10.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]",
"at java.lang.Thread.run(Thread.java:834) [?:?]",
"Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: bulk [default_local] reports failures when exporting documents",
"at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:121) ~[?:?]",
"... 39 more"] }

What does GET _cat/indices/.kibana*?v return?
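
For example, with curl (assuming Elasticsearch is listening on localhost:9200 with no authentication, as in your setup):

curl -s 'http://localhost:9200/_cat/indices/.kibana*?v'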

health status index                          uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana-event-log-7.9.3-000001 OZVXYi2YTriIllmsirJ0rw   1   0          2            0       11kb           11kb
green  open   .kibana-event-log-7.9.2-000001 mD6Q7HreRqGDOI4L2KqMIA   1   0          5            0     27.2kb         27.2kb
green  open   .kibana-event-log-7.9.1-000002 COvrS3OTQh-2ZeuMMuZxpw   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.9.1-000001 DI9jV10iThS2bJFZKEF_ZQ   1   0          5            0     27.2kb         27.2kb
green  open   .kibana-event-log-7.9.0-000001 WBtpQuGlQS6x1wz3jGj_dg   1   0          1            0      5.6kb          5.6kb
green  open   .kibana-event-log-7.9.2-000002 nT8W1WTSTXu3_-S1_G2JFg   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.9.0-000002 8z9iuFmbSpeIjgJPsO_EUA   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.9.0-000003 ByZNhawPTWOKN0vstkkkWw   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.9.1-000003 2NPDKbsuTQyYLm3fKt-k7Q   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.9.0-000004 ujzjJt2WRaa-R0gKOzkXgQ   1   0          0            0       208b           208b
green  open   .kibana_task_manager_2         gTd9FhxCRqSNFelvXllctg   1   0          6         3110    376.3kb        376.3kb
green  open   .kibana_task_manager_1         SDDs_gWoRbOzlvlkaX7YFA   1   0          5            2     38.4kb         38.4kb
green  open   .kibana_task_manager_3         u2TDKzdYRtCe9HzES1JZ2A   1   0          0            0       208b           208b
green  open   .kibana_6                      qWq4O_7RT-uy8dP8O285UQ   1   0          0            0       208b           208b
green  open   .kibana_5                      4oZxWJ8fQn-VszXMBGVhvQ   1   0        124           51     10.4mb         10.4mb
green  open   .kibana_2                      L9K-e3LJShC5YylnFQtuHA   1   0        336           11    215.8kb        215.8kb
green  open   .kibana_1                      n2uZbG6gTLmVPAFFXzWYmA   1   0        214            9    190.1kb        190.1kb
green  open   .kibana_4                      3pc8-s_PTUS0bnvDJHevCQ   1   0        270           13     10.5mb         10.5mb
green  open   .kibana_3                      uZS-YaRdRWmpsU7sLVceOQ   1   0        405            1    173.7kb        173.7kb
green  open   .kibana-event-log-7.8.1-000004 x5ZGxGN1TfOKUjIH2DScDQ   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.8.0-000005 ksMToselRVOu8RbKRnrlUQ   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.8.0-000006 m61gMHKtS7ulOUGV1E4Pfg   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.8.1-000001 5CKQ2-ncQoWS1HIzAwcEmg   1   0          2            0     10.5kb         10.5kb
green  open   .kibana-event-log-7.8.0-000003 58q122ODRHashHTlM2RLHQ   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.8.1-000002 gK_qqKxoRKOPS1ZLi6G-pw   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.8.0-000004 Ot5pXywMTmu-YSjV9Rj8zw   1   0          0            0       208b           208b
green  open   .kibana-event-log-7.8.1-000003 DtxRQxfVT86y4EerchCsZQ   1   0          0            0       208b           208b

To identify the root cause of the upgrade failure we would need to see the Kibana logs from the time of the upgrade. Logs from after restarting Kibana won't be useful: once a migration has failed, Kibana won't retry it until the migration lock has been removed.

For instructions on removing the migration lock so that Kibana can retry the migration, see https://www.elastic.co/guide/en/kibana/current/upgrade-migrations.html
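
As a rough sketch only (follow the linked page for the exact steps, and snapshot before deleting anything): the lock is effectively the empty target indices left behind by the failed migration, which can be deleted so Kibana recreates them on the next start. The index names below are taken from the resource_already_exists_exception errors earlier in this thread.

# Illustrative only -- .kibana_6 and .kibana_task_manager_3 are the half-created
# migration targets from the errors above; verify against the docs and your own
# cluster (they should be empty) before deleting.
curl -XDELETE 'http://localhost:9200/.kibana_6'
curl -XDELETE 'http://localhost:9200/.kibana_task_manager_3'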

I removed the locks and started Kibana again. This time it came up successfully, but logged:

[13:33:29.116] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Actions-actions_telemetry]: version conflict, document already exists (current version [1])
[13:33:29.124] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Alerting-alerting_telemetry]: version conflict, document already exists (current version [1])
[13:33:29.125] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:endpoint:user-artifact-packager:1.0.0]: version conflict, document already exists (current version [1])
[13:33:29.139] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Lens-lens_telemetry]: version conflict, document already exists (current version [1])
[13:33:29.198] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:apm-telemetry-task]: version conflict, document already exists (current version [1])
[13:33:29.375] [info][listening] Server running at http://localhost:5601

Looking back at the logs from the initial upgrade, it looks like it was just a RequestTimeout that caused the migration to fail.
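
For future upgrades I'm considering raising Kibana's request timeout in kibana.yml (assuming the default of 30000 ms is what was hit here):

# kibana.yml -- give slow saved-object migrations more time before the client gives up
elasticsearch.requestTimeout: 120000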

These conflict errors are expected and can safely be ignored.
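
If you want to double-check that the migration completed, you can verify that the .kibana and .kibana_task_manager aliases now point at the new indices (in your case they should be .kibana_6 and .kibana_task_manager_3):

curl -s 'http://localhost:9200/_cat/aliases/.kibana*?v'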
