We are getting the errors below in Kibana while performing a rolling upgrade of Elasticsearch and Kibana.
We are doing this with full Ansible automation, testing it in a virtual environment (on-prem), and upgrading Elasticsearch first, then Kibana.
Below are the errors:
Kibana Discovery Service couldn't update this node's last_seen timestamp. id: bd7cf1df-b553-4c7a-90bf-3517b6856978, last_seen: 2025-08-21T07:27:33.786Z, error:connect ECONNREFUSED 192.xxx.xxx.96:9200
Task actions_telemetry "Actions-actions_telemetry" failed: Error: [error_messages]: expected value of type [object] but got [Array]
error writing bulk events: "connect ECONNREFUSED 192.xxx.xxx.137:9200"; docs: [{"create":{}},{"@timestamp":"2025-08-21T07:51:29.150Z","event":{"provider":"eventLog","action":"stopping"},"message":"eventLog stopping","ecs":{"version":"1.8.0"},"kibana":{"server_uuid":"0384f5db-a239-46b9-9907-98d136ec1754","version":"8.18.4"}}]
Deleting current node has failed. error: connect ECONNREFUSED 192.xxx.xxx.137:9200
Error getting full task apm-source-map-migration-task-id:task during claim: Saved object [task/apm-source-map-migration-task-id] not found
Error getting full task Dashboard-dashboard_telemetry:task during claim: Saved object [task/Dashboard-dashboard_telemetry] not found
Error getting full task ProductDocBase:EnsureUpToDate:task during claim: Saved object [task/ProductDocBase:EnsureUpToDate] not found
Error getting full task apm-telemetry-task:task during claim: Saved object [task/apm-telemetry-task] not found
Kibana Discovery Service couldn't update this node's last_seen timestamp. id: 0384f5db-a239-46b9-9907-98d136ec1754, last_seen: 2025-08-21T08:40:39.324Z, error:Saved object index alias [.kibana_task_manager_8.18.4] not found: index_not_found_exceptionRoot causes:index_not_found_exception: no such index [.kibana_task_manager_8.18.4] and [require_alias] request flag is [true] and [.kibana_task_manager_8.18.4] is not an alias
Not that I know of. As mentioned, if this only happens during the upgrade, it is not an issue: during the upgrade the Elasticsearch nodes are being restarted, which can cause temporary issues in the communication between Kibana and Elasticsearch and in the availability of some indices, and that in turn leads to some errors and warnings being logged.
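One way to keep those transient connection errors short is to have the automation wait for the cluster to recover before moving on to the next node. Below is a minimal sketch of such an Ansible task; the URL, the credential variables (es_username, es_password) and the wait_for_status target are assumptions for illustration, not taken from your playbooks.

```yaml
# Hypothetical example: pause the rolling upgrade until the cluster health
# endpoint on the node that was just restarted reports at least "yellow".
- name: Wait for Elasticsearch to be reachable and the cluster to recover
  ansible.builtin.uri:
    url: "https://{{ inventory_hostname }}:9200/_cluster/health?wait_for_status=yellow&timeout=60s"
    user: "{{ es_username }}"        # placeholder variable
    password: "{{ es_password }}"    # placeholder variable
    force_basic_auth: true
    validate_certs: false            # adjust to your TLS setup
  register: es_health
  until: es_health.status == 200
  retries: 30
  delay: 10
```

That keeps the window in which Kibana logs ECONNREFUSED as short as possible, but it will not remove the messages entirely.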
To not see them at all, I think you would need to change the Kibana log level to FATAL, so that only fatal errors are logged, but I'm not sure whether you can do that without restarting Kibana.
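For reference, a minimal sketch of what that would look like in kibana.yml, assuming the Kibana 8.x logging configuration:

```yaml
# Raise the root logger threshold so that only fatal messages are written.
# Valid levels are: off, fatal, error, warn, info (default), debug, trace, all.
logging:
  root:
    level: fatal
```

Keep in mind this hides warnings and non-fatal errors everywhere, not just during the upgrade; if you only want to quiet specific noisy loggers, you can set per-logger levels under logging.loggers instead.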