Getting errors during rolling upgrade of Elasticsearch and Kibana (8.18.4 to 8.19.2)

Hi Team,

Hope you are doing well.

We are getting the errors below in Kibana while performing a rolling upgrade of Elasticsearch and Kibana.

We are doing this with full Ansible automation, testing it in a virtualized environment (on-prem), and upgrading Elasticsearch first, then Kibana.
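For reference, here is a rough sketch of the per-node flow our playbook performs on each Elasticsearch host. It is simplified: TLS validation options are omitted, and the task names, the es_admin_password variable, and the exact package name are placeholders, not our actual playbook. The play runs with serial: 1 so only one node is restarted at a time.

# Disable replica allocation so shards are not rebalanced while this node restarts.
- name: Disable replica shard allocation
  ansible.builtin.uri:
    url: "https://{{ inventory_hostname }}:9200/_cluster/settings"
    method: PUT
    user: admin
    password: "{{ es_admin_password }}"
    force_basic_auth: true
    body_format: json
    body:
      persistent:
        cluster.routing.allocation.enable: primaries

- name: Upgrade the Elasticsearch package
  ansible.builtin.package:
    name: elasticsearch-8.19.2
    state: present

- name: Restart Elasticsearch
  ansible.builtin.systemd:
    name: elasticsearch
    state: restarted

# Reset the allocation setting to its default once the node is back.
- name: Re-enable replica shard allocation
  ansible.builtin.uri:
    url: "https://{{ inventory_hostname }}:9200/_cluster/settings"
    method: PUT
    user: admin
    password: "{{ es_admin_password }}"
    force_basic_auth: true
    body_format: json
    body:
      persistent:
        cluster.routing.allocation.enable: null

# Retry until the cluster reports green; only then does the play move to the next node.
- name: Wait for the cluster to report green
  ansible.builtin.uri:
    url: "https://{{ inventory_hostname }}:9200/_cluster/health?wait_for_status=green&timeout=60s"
    user: admin
    password: "{{ es_admin_password }}"
    force_basic_auth: true
  register: health
  retries: 10
  delay: 30
  until: health.status == 200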

Below are the errors:

Kibana Discovery Service couldn't update this node's last_seen timestamp. id: bd7cf1df-b553-4c7a-90bf-3517b6856978, last_seen: 2025-08-21T07:27:33.786Z, error:connect ECONNREFUSED 192.xxx.xxx.96:9200

Task actions_telemetry "Actions-actions_telemetry" failed: Error: [error_messages]: expected value of type [object] but got [Array]

error writing bulk events: "connect ECONNREFUSED 192.xxx.xxx.137:9200"; docs: [{"create":{}},{"@timestamp":"2025-08-21T07:51:29.150Z","event":{"provider":"eventLog","action":"stopping"},"message":"eventLog stopping","ecs":{"version":"1.8.0"},"kibana":{"server_uuid":"0384f5db-a239-46b9-9907-98d136ec1754","version":"8.18.4"}}]

Deleting current node has failed. error: connect ECONNREFUSED 192.xxx.xxx.137:9200

Error getting full task apm-source-map-migration-task-id:task during claim: Saved object [task/apm-source-map-migration-task-id] not found

Error getting full task Dashboard-dashboard_telemetry:task during claim: Saved object [task/Dashboard-dashboard_telemetry] not found

Error getting full task ProductDocBase:EnsureUpToDate:task during claim: Saved object [task/ProductDocBase:EnsureUpToDate] not found

Error getting full task apm-telemetry-task:task during claim: Saved object [task/apm-telemetry-task] not found

Kibana Discovery Service couldn't update this node's last_seen timestamp. id: 0384f5db-a239-46b9-9907-98d136ec1754, last_seen: 2025-08-21T08:40:39.324Z, error:Saved object index alias [.kibana_task_manager_8.18.4] not found: index_not_found_exceptionRoot causes:index_not_found_exception: no such index [.kibana_task_manager_8.18.4] and [require_alias] request flag is [true] and [.kibana_task_manager_8.18.4] is not an alias

elasticsearch.yml

#
# Ansible managed: Do NOT edit this file manually!
#

cluster.initial_master_nodes:
- textlog1.example.com
- textlog2.example.com
- textlog3.example.com
cluster.name: demo-cluster
cluster.routing.allocation.disk.watermark.flood_stage.max_headroom: 50GB
cluster.routing.allocation.disk.watermark.high.max_headroom: 100GB
cluster.routing.allocation.disk.watermark.low.max_headroom: 300GB
cluster.routing.allocation.node_concurrent_recoveries: 5
cluster.routing.allocation.node_initial_primaries_recoveries: 4
discovery.seed_hosts:
- textlog1.example.com
- textlog2.example.com
- textlog3.example.com
http.port: 9200
indices.query.bool.max_clause_count: 4096
ingest.geoip.downloader.enabled: false
network.host: 192.xxx.xxx.xx
node.name: textlog1.example.com
node.roles:
- master
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
search.max_buckets: 100000
xpack.security.authc:
  anonymous:
    authz_exception: true
    roles: monitor
    username: _anonymous
xpack.security.authc.realms:
  file:
    admin_fallback:
      order: 1
  native:
    user_store:
      order: 0
xpack.security.enabled: true
xpack.security.enrollment.enabled: false
xpack.security.http.ssl.certificate: /etc/elasticsearch/ssl/elasticsearch.crt
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/ssl/elasticsearch.key
xpack.security.http.ssl.verification_mode: certificate
xpack.security.transport.ssl.certificate: /etc/elasticsearch/ssl/elasticsearch.crt
xpack.security.transport.ssl.certificate_authorities: /etc/elasticsearch/ssl/Root-G1.crt
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: /etc/elasticsearch/ssl/elasticsearch.key
xpack.security.transport.ssl.verification_mode: certificate

kibana.yml

#
# Ansible managed: Do NOT edit this file manually!
#

csp.strict: false
elasticsearch.hosts:
- https://192.xxx.xxx.96:9200
elasticsearch.password: admin123
elasticsearch.pingTimeout: 60000
elasticsearch.requestTimeout: 120000
elasticsearch.ssl.certificateAuthorities:
- /etc/kibana/ssl/Root-G1.crt
elasticsearch.ssl.verificationMode: certificate
elasticsearch.username: admin
logging:
  appenders:
    rolling-file:
      fileName: /var/log/kibana/kibana.json
      layout:
        type: json
      policy:
        size: 100mb
        type: size-limit
      strategy:
        max: 3
        pattern: -%i
        type: numeric
      type: rolling-file
  root:
    appenders:
    - default
    - rolling-file
migrations.discardCorruptObjects: 8.19.2
migrations.discardUnknownObjects: 8.19.2
monitoring.ui.ccs.enabled: false
server.host: 0.0.0.0
server.name: virtkibana1
server.port: 5601
server.publicBaseUrl: https://kibana.virt.test
server.ssl.certificate: /etc/kibana/ssl/kibana.crt
server.ssl.cipherSuites:
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
server.ssl.enabled: true
server.ssl.key: /etc/kibana/ssl/kibana.key
server.ssl.supportedProtocols:
- TLSv1.2
- TLSv1.3
xpack.encryptedSavedObjects.encryptionKey: vfdetpy5asdfghjk8pcuDh5ANuqU7HoZ
xpack.fleet.agents.enabled: false
xpack.fleet.enabled: false
xpack.infra.sources.default.fields.message:
- msg
xpack.reporting.enabled: true
xpack.reporting.encryptionKey: vfgtCGYzY87ypdy3NKK5Sasderfderzw7
xpack.reporting.queue.timeout: 15m
xpack.reporting.roles.enabled: false
xpack.security.audit.appender:
  fileName: /var/log/kibana/audit.json
  layout:
    type: json
  policy:
    interval: 24h
    type: time-interval
  strategy:
    max: 10
    type: numeric
  type: rolling-file
xpack.security.audit.enabled: true
xpack.security.encryptionKey: 3YCtpysdfg6huG3w8pcuDh5ANuqnhgft
xpack.security.secureCookies: false
xpack.security.session.idleTimeout: 1w
xpack.security.session.lifespan: 60d
xpack.task_manager.max_attempts: 5
xpack.task_manager.max_workers: 20
xpack.task_manager.poll_interval: 20000

When does this error happen? After you upgraded Elasticsearch, or during the upgrade?

Is your automation upgrading Kibana at the same time, or is it waiting for Elasticsearch to be upgraded before starting to upgrade Kibana?


These errors occur during the Elasticsearch upgrade.

The automation waits for the Elasticsearch upgrade to complete, and only then does the Kibana upgrade start.
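The gate in front of the Kibana role is essentially a version check, along these lines (a simplified sketch, not our exact tasks; the es_admin_password variable is a placeholder and TLS validation options are omitted):

- name: Fetch the version reported by every node in the cluster
  ansible.builtin.uri:
    url: "https://textlog1.example.com:9200/_nodes?filter_path=nodes.*.version"
    user: admin
    password: "{{ es_admin_password }}"
    force_basic_auth: true
    return_content: true
  register: nodes_info

# Fail the play (and so skip the Kibana upgrade) unless every node runs the target version.
- name: Assert that all Elasticsearch nodes run 8.19.2
  ansible.builtin.assert:
    that:
      - nodes_info.json.nodes | dict2items | map(attribute='value.version') | unique | list == ['8.19.2']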

Do they still happen after both Elasticsearch and Kibana are upgraded?

I think it is normal to have some error messages during the upgrade process, as nodes may be leaving the cluster and coming back.

If the errors do not persist after the upgrade, this is not an issue.


Thanks for the quick response.

No, these errors do not persist after the upgrade.

Is there any way to prevent these errors during the upgrade?

Not that I know of. As mentioned, if this only happens during the upgrade, it is not an issue. During the upgrade the Elasticsearch nodes are being restarted, which can cause temporary issues in communication between Kibana and Elasticsearch, and in the availability of some indices, and that can lead to some errors and warnings being logged.

To avoid seeing them, I think you would need to change the Kibana log level to FATAL, so only fatal errors would be logged, but I'm not sure you can do that without restarting Kibana.


Thanks for the update.

So, you mean adding the lines below to the kibana.yml file should get rid of these errors during the upgrade?

logging:
  root:
    level: fatal
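For completeness, since root already exists under the logging section of our kibana.yml above, the merged section would look like this (assuming level just needs to be added to the existing root block, with the current appenders kept):

logging:
  appenders:
    rolling-file:
      fileName: /var/log/kibana/kibana.json
      layout:
        type: json
      policy:
        size: 100mb
        type: size-limit
      strategy:
        max: 3
        pattern: -%i
        type: numeric
      type: rolling-file
  root:
    appenders:
    - default
    - rolling-file
    level: fatal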