Kibana v7.16.1 login issue. Not able to login (ResponseError: version_conflict_engine_exception)

@ropc , We have now upgraded both Kibana and Elasticsearch to 8.3.3. The login issue still occurs intermittently. Please find the Kibana logs below for reference, and please suggest next steps.

Also, when checking the Kibana status page under the Stack Monitoring section, some plugins show a yellow status with a "plugin degraded" message. PFA a screenshot of the same.

Could this degraded plugin status be causing the login issue?

Kibana log:

Aug 23 11:56:17 <Kibana-Host> kibana[472]: [2022-08-23T11:56:17.051+00:00][ERROR][plugins.taskManager] [WorkloadAggregator]: NoLivingConnectionsError: There are no living connections
Aug 23 11:56:18 <Kibana-Host> kibana[472]: [2022-08-23T11:56:18.018+00:00][ERROR][plugins.taskManager] Failed to poll for work: NoLivingConnectionsError: There are no living connections
Aug 23 11:56:21 <Kibana-Host> kibana[472]: [2022-08-23T11:56:21.017+00:00][ERROR][plugins.taskManager] Failed to poll for work: NoLivingConnectionsError: There are no living connections
Aug 23 11:56:24 <Kibana-Host> kibana[472]: [2022-08-23T11:56:24.017+00:00][ERROR][plugins.taskManager] Failed to poll for work: NoLivingConnectionsError: There are no living connections
Aug 23 11:56:27 <Kibana-Host> kibana[472]: [2022-08-23T11:56:27.019+00:00][ERROR][plugins.taskManager] Failed to poll for work: NoLivingConnectionsError: There are no living connections
Aug 23 11:56:30 <Kibana-Host> kibana[472]: [2022-08-23T11:56:30.019+00:00][ERROR][plugins.taskManager] Failed to poll for work: NoLivingConnectionsError: There are no living connections
Aug 23 11:56:33 <Kibana-Host> kibana[472]: [2022-08-23T11:56:33.019+00:00][ERROR][plugins.taskManager] Failed to poll for work: NoLivingConnectionsError: There are no living connections
Aug 23 11:56:36 <Kibana-Host> kibana[472]: [2022-08-23T11:56:36.019+00:00][ERROR][plugins.taskManager] Failed to poll for work: NoLivingConnectionsError: There are no living connections
Aug 23 11:56:39 <Kibana-Host> kibana[472]: [2022-08-23T11:56:39.021+00:00][ERROR][plugins.taskManager] Failed to poll for work: NoLivingConnectionsError: There are no living connections
Aug 23 11:56:40 <Kibana-Host> systemd[1]: Stopped Kibana.
-- Subject: Unit kibana.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit kibana.service has finished shutting down.
Aug 23 11:56:40 <Kibana-Host> systemd[1]: Started Kibana.
-- Subject: Unit kibana.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit kibana.service has finished starting up.
-- 
-- The start-up result is done.
Aug 23 11:56:54 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:53.982+00:00][INFO ][plugins-service] Plugin "cloudSecurityPosture" is disabled.
Aug 23 11:56:54 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:54.106+00:00][INFO ][http.server.Preboot] http server running at http://10.120.115.34:5601
Aug 23 11:56:54 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:54.174+00:00][INFO ][plugins-system.preboot] Setting up [1] plugins: [interactiveSetup]
Aug 23 11:56:54 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:54.232+00:00][WARN ][config.deprecation] The default mechanism for Reporting privileges will work differently in future versions, which will affect the behavior of this cluster. Set "xpack.reporting.roles.enabled" to "false" to adopt the future behavior before upgrading.
Aug 23 11:56:54 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:54.525+00:00][INFO ][plugins-system.standard] Setting up [118] plugins: [translations,monitoringCollection,licensing,globalSearch,globalSearchProviders,features,mapsEms,licenseApiGuard,usageCollection,taskManager,telemetryCollectionManager,telemetryCollectionXpack,share,embeddable,uiActionsEnhanced,screenshotMode,banners,newsfeed,fieldFormats,expressions,eventAnnotation,dataViews,charts,esUiShared,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,advancedSettings,spaces,security,lists,encryptedSavedObjects,cloud,snapshotRestore,screenshotting,telemetry,licenseManagement,kibanaUsageCollection,eventLog,actions,console,bfetch,data,watcher,reporting,fileUpload,ingestPipelines,alerting,aiops,unifiedSearch,savedObjects,triggersActionsUi,transform,stackAlerts,ruleRegistry,graph,savedObjectsTagging,savedObjectsManagement,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,controls,dataViewFieldEditor,visualizations,canvas,visTypeXy,visTypeVislib,visTypeVega,visTypeTimeseries,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeHeatmap,visTypeMarkdown,dashboard,dashboardEnhanced,expressionXY,expressionTagcloud,expressionPartitionVis,visTypePie,expressionMetricVis,expressionHeatmap,expressionGauge,visTypeGauge,sharedUX,discover,lens,maps,dataVisualizer,ml,cases,timelines,sessionView,observability,fleet,synthetics,osquery,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,discoverEnhanced,dataViewManagement]
Aug 23 11:56:54 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:54.546+00:00][INFO ][plugins.taskManager] TaskManager is identified by the Kibana UUID: a13bf5a9-8186-4da5-a7a8-3bd329a934bf
Aug 23 11:56:54 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:54.630+00:00][WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
Aug 23 11:56:54 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:54.665+00:00][WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
Aug 23 11:56:54 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:54.840+00:00][INFO ][plugins.ruleRegistry] Installing common resources shared between all indices
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.607+00:00][WARN ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 7.9.2009 OS. Automatically setting 'xpack.screenshotting.browser.chromium.disableSandbox: true'.
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.784+00:00][INFO ][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.785+00:00][INFO ][savedobjects-service] Starting saved objects migrations
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.835+00:00][INFO ][savedobjects-service] [.kibana_task_manager] INIT -> OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT. took: 17ms.
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.909+00:00][INFO ][savedobjects-service] [.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT -> OUTDATED_DOCUMENTS_SEARCH_READ. took: 74ms.
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.917+00:00][INFO ][savedobjects-service] [.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT. took: 8ms.
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.925+00:00][INFO ][savedobjects-service] [.kibana] INIT -> OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT. took: 125ms.
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.928+00:00][INFO ][savedobjects-service] [.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 11ms.
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.931+00:00][INFO ][savedobjects-service] [.kibana] OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT -> OUTDATED_DOCUMENTS_SEARCH_READ. took: 6ms.
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.948+00:00][INFO ][savedobjects-service] [.kibana_task_manager] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 20ms.
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.951+00:00][INFO ][savedobjects-service] [.kibana] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT. took: 20ms.
Aug 23 11:56:55 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:55.956+00:00][INFO ][savedobjects-service] [.kibana] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 5ms.
Aug 23 11:56:56 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:56.058+00:00][INFO ][savedobjects-service] [.kibana] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 102ms.
Aug 23 11:56:56 <Kibana-Host> kibana[6372]: [2022-08-23T11:56:56.499+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell
Aug 23 11:57:15 <Kibana-Host> kibana[6372]: [2022-08-23T11:57:15.015+00:00][INFO ][savedobjects-service] [.kibana] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE. took: 18957ms.
Aug 23 11:57:15 <Kibana-Host> kibana[6372]: [2022-08-23T11:57:15.015+00:00][INFO ][savedobjects-service] [.kibana] Migration completed after 19215ms
Aug 23 11:57:56 <Kibana-Host> kibana[6372]: [2022-08-23T11:57:56.010+00:00][ERROR][savedobjects-service] [.kibana_task_manager] Action failed with '[timeout_exception] Timed out waiting for completion of [Task{id=13293779, type='transport', action='indices:data/write/update/byquery', description='update-by-query [.kibana_task_manager_8.3.3_001]', parentTask=unset, startTime=1661255815945, startTimeNanos=8838582368699888}]'. Retrying attempt 1 in 2 seconds.
Aug 23 11:57:56 <Kibana-Host> kibana[6372]: [2022-08-23T11:57:56.011+00:00][INFO ][savedobjects-service] [.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 60062ms.
Aug 23 11:58:58 <Kibana-Host> kibana[6372]: [2022-08-23T11:58:58.109+00:00][ERROR][savedobjects-service] [.kibana_task_manager] Action failed with '[timeout_exception] Timed out waiting for completion of [Task{id=13293779, type='transport', action='indices:data/write/update/byquery', description='update-by-query [.kibana_task_manager_8.3.3_001]', parentTask=unset, startTime=1661255815945, startTimeNanos=8838582368699888}]'. Retrying attempt 2 in 4 seconds.
Aug 23 11:58:58 <Kibana-Host> kibana[6372]: [2022-08-23T11:58:58.109+00:00][INFO ][savedobjects-service] [.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 62099ms.
Aug 23 11:59:02 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:02.117+00:00][INFO ][savedobjects-service] [.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE. took: 4008ms.
Aug 23 11:59:02 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:02.117+00:00][INFO ][savedobjects-service] [.kibana_task_manager] Migration completed after 126299ms
Aug 23 11:59:02 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:02.128+00:00][INFO ][status] Kibana is now unavailable
Aug 23 11:59:02 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:02.129+00:00][INFO ][plugins-system.preboot] Stopping all plugins.
Aug 23 11:59:02 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:02.130+00:00][INFO ][plugins-system.standard] Starting [118] plugins: [translations,monitoringCollection,licensing,globalSearch,globalSearchProviders,features,mapsEms,licenseApiGuard,usageCollection,taskManager,telemetryCollectionManager,telemetryCollectionXpack,share,embeddable,uiActionsEnhanced,screenshotMode,banners,newsfeed,fieldFormats,expressions,eventAnnotation,dataViews,charts,esUiShared,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,advancedSettings,spaces,security,lists,encryptedSavedObjects,cloud,snapshotRestore,screenshotting,telemetry,licenseManagement,kibanaUsageCollection,eventLog,actions,console,bfetch,data,watcher,reporting,fileUpload,ingestPipelines,alerting,aiops,unifiedSearch,savedObjects,triggersActionsUi,transform,stackAlerts,ruleRegistry,graph,savedObjectsTagging,savedObjectsManagement,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,controls,dataViewFieldEditor,visualizations,canvas,visTypeXy,visTypeVislib,visTypeVega,visTypeTimeseries,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeHeatmap,visTypeMarkdown,dashboard,dashboardEnhanced,expressionXY,expressionTagcloud,expressionPartitionVis,visTypePie,expressionMetricVis,expressionHeatmap,expressionGauge,visTypeGauge,sharedUX,discover,lens,maps,dataVisualizer,ml,cases,timelines,sessionView,observability,fleet,synthetics,osquery,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,discoverEnhanced,dataViewManagement]
Aug 23 11:59:03 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:03.100+00:00][INFO ][plugins.monitoring.monitoring] config sourced from: production cluster
Aug 23 11:59:04 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:04.924+00:00][INFO ][http.server.Kibana] http server running at http://10.120.115.34:5601
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.118+00:00][INFO ][plugins.monitoring.monitoring.kibana-monitoring] Starting monitoring stats collection
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.119+00:00][INFO ][plugins.fleet] Beginning fleet setup
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.167+00:00][INFO ][status] Kibana is now degraded (was unavailable)
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.473+00:00][INFO ][plugins.fleet] Fleet setup completed
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.482+00:00][INFO ][plugins.securitySolution] Dependent plugin setup complete - Starting ManifestTask
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.862+00:00][INFO ][plugins.ruleRegistry] Installed common resources shared between all indices
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.862+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .alerts-observability.uptime.alerts
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.863+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .alerts-security.alerts
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.863+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .preview.alerts-security.alerts
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.863+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .alerts-observability.logs.alerts
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.863+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .alerts-observability.metrics.alerts
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.864+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .alerts-observability.apm.alerts
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.873+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .alerts-observability.metrics.alerts
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.875+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .alerts-security.alerts
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.877+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .alerts-observability.apm.alerts
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.878+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .alerts-observability.logs.alerts
Aug 23 11:59:05 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:05.880+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .alerts-observability.uptime.alerts
Aug 23 11:59:06 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:06.133+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .preview.alerts-security.alerts
Aug 23 11:59:30 <Kibana-Host> kibana[6372]: [2022-08-23T11:59:30.369+00:00][INFO ][plugins.ml] Task ML:saved-objects-sync-task: scheduled with interval 1h
Aug 23 12:00:17 <Kibana-Host> kibana[6372]: [2022-08-23T12:00:17.187+00:00][INFO ][plugins.security.routes] Logging in with provider "basic" (basic)
Aug 23 12:00:30 <Kibana-Host> kibana[6372]: [2022-08-23T12:00:30.379+00:00][ERROR][plugins.taskManager] Failed to poll for work: Error: work has timed out
Aug 23 12:01:30 <Kibana-Host> kibana[6372]: [2022-08-23T12:01:30.384+00:00][ERROR][plugins.taskManager] Failed to poll for work: Error: work has timed out
Aug 23 12:01:33 <Kibana-Host> kibana[6372]: [2022-08-23T12:01:33.751+00:00][INFO ][plugins.ml] Task ML:saved-objects-sync-task: No ML saved objects in need of synchronization
Aug 23 12:01:54 <Kibana-Host> kibana[6372]: [2022-08-23T12:01:54.237+00:00][INFO ][status] Kibana is now available (was degraded)

Regards,
Avinash

@Avinash_09 - what is interesting in the logs you provided is the relative slowness of the saved objects migration (which happens every time you restart Kibana). This is not related to the login issue, but it may point to performance issues in Elasticsearch. If the cluster is overwhelmed, that could explain some of the login issues you are seeing.

You may want to check whether the cluster is healthy. If you have set up monitoring on the Elasticsearch cluster, it would also be good to review the metrics: request latency, CPU, and JVM heap usage. A few APIs that may be useful here as well:

  • GET _cluster/health
  • GET _cat/nodes?v
  • GET _cat/thread_pool
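As a rough illustration (not part of the original thread), the kinds of signals to look for in the `GET _cluster/health` response can be sketched in Python. The sample response below uses hypothetical values; the field names are the real ones returned by the API:

```python
import json

# Hypothetical _cluster/health response (illustrative values only).
sample = json.loads("""
{
  "cluster_name": "my-cluster",
  "status": "yellow",
  "number_of_nodes": 5,
  "unassigned_shards": 12,
  "number_of_pending_tasks": 3,
  "task_max_waiting_in_queue_millis": 4500
}
""")

def health_warnings(health: dict) -> list:
    """Return human-readable warnings for common signs of an unhealthy cluster."""
    warnings = []
    if health.get("status") != "green":
        warnings.append(f"cluster status is {health.get('status')}")
    if health.get("unassigned_shards", 0) > 0:
        warnings.append(f"{health['unassigned_shards']} unassigned shards")
    if health.get("task_max_waiting_in_queue_millis", 0) > 1000:
        warnings.append("cluster tasks are queueing (possible master overload)")
    return warnings

for warning in health_warnings(sample):
    print("WARN:", warning)
```

A yellow status, unassigned shards, or a long pending-task queue would all be consistent with the slow migrations seen in the Kibana logs above.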

Hi @ropc

We are still seeing the version_conflict_engine_exception, and it is causing the login issue continuously. As per your earlier suggestions, we have upgraded Elasticsearch and Kibana to 8.4.0, but the issue still persists.

Is there any resolution for this issue? Your help is much appreciated. Thanks in advance.

PFA the log below for reference:

[2022-09-09T11:57:46.478+00:00][DEBUG][elasticsearch.query.data] 409 - 547.0B
PUT /.kibana_security_session/_create/D2yUJiPS1jq37qIuJBQIS3aOWx%2B5xCxZueEV5dL8Hr4%3D?refresh=wait_for&require_alias=true
{"provider":{"type":"basic","name":"basic"},"idleTimeoutExpiration":1662753436462,"lifespanExpiration":1665316636462,"usernameHash":"8b85e46e71104ee25dbe6ec8376ddadcb22784a19d3d207be08a2bc51f0ebc7c","content":"jlwS3C7S8TM+qZELFbJmWyc5lmcq/+gVeMGwJH+kFVRAPitsSj5v1IOlBbdNkBTgVk1APxvAlBikI0Arbz8ztnvaTWzGtzvZqOLYGyBiPQpn0jkseaOF48megiyH4j5sZqKOYq5qcOo4/U/JiBCWvTm+t9R//hNp+VggdwPTs5GYhWF7NeKCT4hKr2AqUVuFq3P862H182NKerp0a/+oco8ILGTi4tew75sad7+n4BHQXORQIkyGFDWNjKK37uuCMtQmS6vxe+5UM+i/pLH+SS0kf0Jh4YiCSjA+amL+LlAn2gUUT5HMhBfWh0QYPt3FBMOMw8Y1J3XnHLW9MZ8I2lAEBko="} [version_conflict_engine_exception]: [D2yUJiPS1jq37qIuJBQIS3aOWx+5xCxZueEV5dL8Hr4=]: version conflict, document already exists (current version [1])
[2022-09-09T11:57:46.479+00:00][ERROR][plugins.security.session.index] Failed to create session value: version_conflict_engine_exception: [version_conflict_engine_exception] Reason: [D2yUJiPS1jq37qIuJBQIS3aOWx+5xCxZueEV5dL8Hr4=]: version conflict, document already exists (current version [1])
[2022-09-09T11:57:46.479+00:00][ERROR][http] ResponseError: version_conflict_engine_exception: [version_conflict_engine_exception] Reason: [D2yUJiPS1jq37qIuJBQIS3aOWx+5xCxZueEV5dL8Hr4=]: version conflict, document already exists (current version [1])
    at KibanaTransport.request (/usr/share/kibana/node_modules/@elastic/transport/lib/Transport.js:476:27)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at KibanaTransport.request (/usr/share/kibana/node_modules/@kbn/core-elasticsearch-client-server-internal/target_node/create_transport.js:58:16)
    at ClientTraced.CreateApi [as create] (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/api/api/create.js:43:12)
    at SessionIndex.writeNewSessionDocument (/usr/share/kibana/x-pack/plugins/security/server/session_management/session_index.js:552:9)
    at SessionIndex.create (/usr/share/kibana/x-pack/plugins/security/server/session_management/session_index.js:180:11)
    at Session.create (/usr/share/kibana/x-pack/plugins/security/server/session_management/session.js:143:31)
    at Authenticator.updateSessionValue (/usr/share/kibana/x-pack/plugins/security/server/authentication/authenticator.js:598:25)
    at Authenticator.login (/usr/share/kibana/x-pack/plugins/security/server/authentication/authenticator.js:218:37)
    at /usr/share/kibana/x-pack/plugins/security/server/routes/authentication/common.js:156:34
    at Router.handle (/usr/share/kibana/node_modules/@kbn/core-http-router-server-internal/target_node/router.js:163:30)
    at handler (/usr/share/kibana/node_modules/@kbn/core-http-router-server-internal/target_node/router.js:124:50)
    at exports.Manager.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/toolkit.js:60:28)
    at Object.internals.handler (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:46:20)
    at exports.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:31:20)
    at Request._lifecycle (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:371:32)
    at Request._execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:281:9)

Regards,
Avinash

@Avinash_09 , just as a sanity check, can you run this API and share the results with us:

GET .kibana_security_session*,.security*/_settings?filter_path=*.settings.index.refresh_interval

@ropc ,

As I cannot log in to Kibana, I ran the query below from one of the Elasticsearch nodes. Please find the response below and advise on next steps.

Query: curl -X GET "https://host_name:port/.kibana_security_session*,.security*/_settings?filter_path=*.settings.index.refresh_interval" -u user_name:password -k

Output Response:

{".kibana_security_session_1":{"settings":{"index":{"refresh_interval":"1s"}}},".security-7":{"settings":{"index":{"refresh_interval":"1s"}}},".security-profile-8":{"settings":{"index":{"refresh_interval":"1s"}}}}
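(As a side note, the response can be checked programmatically. A minimal sketch that parses the exact JSON above and confirms all three indices report the default 1s refresh interval:)

```python
import json

# The exact response pasted above.
response = json.loads(
    '{".kibana_security_session_1":{"settings":{"index":{"refresh_interval":"1s"}}},'
    '".security-7":{"settings":{"index":{"refresh_interval":"1s"}}},'
    '".security-profile-8":{"settings":{"index":{"refresh_interval":"1s"}}}}'
)

# Map each index name to its configured refresh interval.
intervals = {
    index: body["settings"]["index"]["refresh_interval"]
    for index, body in response.items()
}
print(intervals)
```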

Thanks in advance.

Regards,
Avinash

@Avinash_09 - could you try invalidating all existing sessions, and let's see if you can log in after that.

You can run this Kibana API:

curl -k -u elastic:password -X POST "<kibana_url>:5601/api/security/session/_invalidate" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d'
{
  "match" : "all"
}
'

@ropc , As suggested, I ran the provided query. We have a clustered environment with 2 Kibana nodes, so I ran it on both. Below is the response.

Query:

curl -k -u user:password -X POST "http://<kibana_host>:5601/api/security/session/_invalidate" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"match" : "all"}'

Output Response:

{"total":0}

No luck; the issue still persists.

Regards,
Avinash

@Avinash_09 - hmmm - this is puzzling.

Kibana should be creating a session and storing this document in the .kibana_security_session index. Based on the above error, it seems that Kibana is trying to create a session with an ID that already exists (which does not make much sense).
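To illustrate the failure mode (a simplified in-memory sketch, not Kibana's actual implementation): the session document is written via the `_create` endpoint, which refuses to overwrite an existing document, so a second create with the same session ID fails with a version conflict:

```python
class VersionConflictError(Exception):
    pass

class FakeSessionIndex:
    """In-memory stand-in for the .kibana_security_session index."""

    def __init__(self):
        self._docs = {}

    def create(self, doc_id: str, body: dict) -> None:
        # Mirrors Elasticsearch's _create semantics: never overwrite.
        if doc_id in self._docs:
            raise VersionConflictError(
                f"[{doc_id}]: version conflict, document already exists"
            )
        self._docs[doc_id] = body

index = FakeSessionIndex()
index.create("D2yUJiPS...", {"provider": "basic"})  # first login: succeeds
try:
    index.create("D2yUJiPS...", {"provider": "basic"})  # same ID again
except VersionConflictError as err:
    print("second create failed:", err)
```

In a healthy setup each login generates a fresh session ID, so the second create should never happen; seeing it repeatedly suggests something (e.g. duplicated requests or shared state between instances) is re-submitting the same ID.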

Our Kibana developers wrote some documentation around authentication issues: the Kibana authentication troubleshooting guide. This is pretty much what we are trying to cover here as well.

Given that you are using 2 Kibana instances behind a proxy, please verify again that the session-related settings are identical on the two Kibana instances (I think you checked that before, but let's do it again).

If you connect directly to one of the Kibana URLs (bypassing the proxy), do you still face the same issue?

@ropc ,

We have followed the standard cluster config settings as per the official recommendations, but we will cross-check and confirm again.

As for your other question: yes, connecting directly via the internal URL (bypassing the proxy) throws the same login error as well.

Regards,
Avinash

@Avinash_09

Let's see if we get the same error after recreating the Kibana security session index. We will first delete the index:

curl -k -u user:password -X DELETE https://elasticsearch_host:port/.kibana_security_session*

Then reload the Kibana URL in the browser and attempt to log in again.

@ropc

I tried to delete the security session index and am getting the exception below, even though we have superuser privileges and the user has been granted the [delete_index,manage,all] index privileges.

Query:

curl -k -u <elastic_user>:<elastic_password> -X DELETE https://dev5079:9200/.kibana_security_session_1

Output Response:

{"error":{"root_cause":[{"type":"security_exception","reason":"action [indices:admin/delete] is unauthorized for user [<elastic_user>] with roles [superuser,kibana-admin-role] on restricted indices [.kibana_security_session_1], this action is granted by the index privileges [delete_index,manage,all]"}],"type":"security_exception","reason":"action [indices:admin/delete] is unauthorized for user [<elastic_user>] with roles [superuser,kibana-admin-role] on restricted indices [.kibana_security_session_1], this action is granted by the index privileges [delete_index,manage,all]"},"status":403}

Attached is a screenshot of the role permissions for the user. Please let me know the next steps.

Also, at the time of the issue, we see the exceptions below on the Elasticsearch nodes. Please go through them and see if they help in debugging the issue further.

[2022-09-12T05:46:11,887][WARN ][r.suppressed             ] [Master] path: /_tasks/v4bSzmAhRVyCqhdK7RSEbA%3A10784056, params: {task_id=v4bSzmAhRVyCqhdK7RSEbA:10784056, wait_for_completion=true, timeout=60s}
org.elasticsearch.transport.RemoteTransportException: [Master-3][10.120.114.98:9300][cluster:monitor/task/get]
Caused by: org.elasticsearch.ElasticsearchTimeoutException: Timed out waiting for completion of [Task{id=10784056, type='transport', action='indices:data/write/update/byquery', description='update-by-query [.kibana_task_manager_8.4.0_001]', parentTask=unset, startTime=1662961449809, startTimeNanos=417241645675760}]
        at org.elasticsearch.tasks.TaskManager.waitForTaskCompletion(TaskManager.java:532) ~[elasticsearch-8.4.0.jar:?]
        at org.elasticsearch.action.admin.cluster.node.tasks.get.TransportGetTaskAction$1.doRun(TransportGetTaskAction.java:137) ~[elasticsearch-8.4.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:769) ~[elasticsearch-8.4.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.4.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
        at java.lang.Thread.run(Thread.java:833) ~[?:?]

Regards,
Avinash

I tried to delete the security session index and am getting the exception below, even though we have superuser privileges and the user has been granted the [delete_index,manage,all] index privileges.

We changed the permissions for system indices in version 8.x, so this is expected. But let's hold off on this for now, since you have shared some other logs (cf. below).

Also, at the time of the issue, we see the exceptions below on the Elasticsearch nodes. Please go through them and see if they help in debugging the issue further.

Yeah, this is what we saw in some of the earlier logs, and it would indicate possible performance issues. This is not something we can troubleshoot without a support diagnostics bundle, which would require engagement with our Support team through a subscription.

If you have stack monitoring enabled, you may want to check the overall latency, CPU usage, JVM heap usage, etc., for signs of performance issues.

@ropc

We have stack monitoring enabled and did not notice any performance issues with CPU or JVM heap usage; usage appears normal.

We use 8-core CPUs for the master and hot-tier nodes, and 4-core CPUs for the warm and cold tiers. As for memory, all nodes are provisioned with 32 GB of RAM.

Setting the performance question aside for now: the system should ideally remain accessible even with a slight network delay. Could you please advise on further steps for deleting the security session index, so we can rule this out as per your earlier inputs?

Regards,
Avinash

Without taking a deeper look at your environment (i.e. a support diagnostics bundle, Elasticsearch logs, Kibana logs), it will be extremely difficult to understand what is going on. I advise you to reach out to our Support team for further analysis.

@ropc ,

Is there any alternate way to delete system indices? We just want to delete some of them to investigate the issue further.

Regards,
Avinash

I would not recommend deleting system indices, as it will likely create more problems, @Avinash_09 .
