Request Timeout Errors Observed After Upgrade from 9.3.0 to 9.3.1 – Timeout Configuration Clarification


Description:

We are observing repeated "Request timed out" errors in Kibana after upgrading the stack from version 9.3.0 to 9.3.1.

These errors occur during core operations such as:

  - Cluster health checks
  - Background task scheduling
  - ILM policy creation

We would like to understand whether:

  1. These errors are expected during or after the upgrade process

  2. There are any configurable timeout parameters in Elasticsearch or Kibana that should be tuned

Error Logs

2026-03-29T23:38:00.051Z | Error loading the cluster health. The task poller will start regardless. Error: Request timed out
2026-03-29T23:38:00.070Z | Error scheduling delete_inactive_background_task_nodes task, received Request timed out
2026-03-29T23:38:00.070Z | Error scheduling invalidate_api_keys task, received Request timed out
2026-03-29T23:38:00.070Z | Error scheduling mark_removed_tasks_as_unrecognized task, received Request timed out
2026-03-29T23:38:00.070Z | Error scheduling maintenance-window:generate-events-generator, received Request timed out
2026-03-29T23:38:00.070Z | Error scheduling Alerting-alerting_health_check, received Request timed out
2026-03-29T23:38:00.070Z | Error scheduling Alerts-alerts_invalidate_api_keys task, received Request timed out
2026-03-29T23:38:00.070Z | [task osquery:telemetry-packs:1.1.0]: error scheduling task, received Request timed out
2026-03-29T23:38:00.070Z | [task osquery:telemetry-configs:1.1.0]: error scheduling task, received Request timed out
2026-03-29T23:38:00.070Z | Error creating ILM policy: Request timed out
2026-03-30T01:10:44.261Z | Error loading the cluster health. The task poller will start regardless. Error: Request timed out
2026-03-30T01:10:44.279Z | Error scheduling delete_inactive_background_task_nodes task, received Request timed out
2026-03-30T01:10:44.279Z | Error scheduling invalidate_api_keys task, received Request timed out
2026-03-30T01:10:44.279Z | Error scheduling mark_removed_tasks_as_unrecognized task, received Request timed out
2026-03-30T01:10:44.279Z | Error scheduling maintenance-window:generate-events-generator, received Request timed out
2026-03-30T01:10:44.279Z | Error scheduling Alerting-alerting_health_check, received Request timed out
2026-03-30T01:10:44.279Z | Error scheduling Alerts-alerts_invalidate_api_keys task, received Request timed out
2026-03-30T01:10:44.279Z | [task osquery:telemetry-packs:1.1.0]: error scheduling task, received Request timed out
2026-03-30T01:10:44.279Z | [task osquery:telemetry-configs:1.1.0]: error scheduling task, received Request timed out
2026-03-30T01:10:44.279Z | Error creating ILM policy: Request timed out


Questions:

  1. Are these timeout errors expected during or after upgrading from 9.3.0 to 9.3.1?
  2. Could this indicate a known issue with this version?
  3. Which timeout-related parameters can be tuned in Kibana and Elasticsearch (e.g., request timeout, shard timeout, task manager settings)?
  4. Are there any recommended configurations or best practices to avoid such timeouts during upgrades?
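For reference, these are the Kibana-side timeout settings we believe are relevant to question 3 (the values shown are, to our understanding, the documented defaults; please correct us if any are wrong):

```yaml
# kibana.yml – timeout-related settings (values shown are assumed defaults)
# Time in ms that Kibana waits for responses from Elasticsearch
elasticsearch.requestTimeout: 30000
# Time in ms that Kibana waits for a ping response from Elasticsearch
elasticsearch.pingTimeout: 30000
# Time in ms that Kibana waits for Elasticsearch to complete shard-level operations
elasticsearch.shardTimeout: 30000
```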

Are these errors still happening, or did they only occur during the upgrade?

During an upgrade, depending on your cluster configuration, some things will not work as expected while services are being stopped and restarted, so you should expect errors like this.

We don't know when you did the upgrade: are these errors still present after the upgrade was completed, or did they occur only during the upgrade process?

Can you also check that the upgrade completed successfully on all nodes, via

GET _cat/nodes?v&h=name,role,version,master,u

in Dev Tools or via curl, and share the result?
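For example, the curl equivalent would be something like this (host, port, and credentials below are placeholders; adjust them for your cluster):

```shell
# Check that every node reports the same version after the upgrade.
# Replace the URL and credentials with your own values.
curl -s -u elastic:changeme \
  "https://localhost:9200/_cat/nodes?v&h=name,role,version,master,u"
```

All nodes should show the same version once the upgrade has finished on every node.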

Yes, these errors occurred during the upgrade process. Could you please provide a hint as to whether this is related to any particular service?

This is normal behavior. As mentioned, services are restarted during upgrades, and these are temporary timeouts. It will also depend on your cluster: whether you have redundancy, how you performed the upgrade, etc.

It is not clear where this log is from; if it is from Kibana, then Kibana had issues connecting to your cluster during the upgrade.

Yes, these errors are coming only from the kibana.log file. As you mentioned, this seems to be a connection issue. What can we do in this situation? Are there any parameters we need to change in the Elasticsearch or Kibana configuration files?

There is not much you can do; as mentioned, this can happen during an upgrade.

What is your Kibana configuration? How many Elasticsearch hosts did you configure in Kibana? If you have just one host, Kibana may have issues connecting to it during the upgrade.
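As a sketch, pointing Kibana at several nodes in kibana.yml looks like this (the hostnames below are placeholders):

```yaml
# kibana.yml – list several Elasticsearch nodes so Kibana can fail over
# while one node is restarting during a rolling upgrade.
# Hostnames are placeholders; replace them with your own nodes.
elasticsearch.hosts:
  - "https://es-node-1:9200"
  - "https://es-node-2:9200"
  - "https://es-node-3:9200"
```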

Even if you have multiple hosts, there may be some timeouts while one of them is offline.

I don't see an issue here: during an upgrade, when a service is being restarted, communication with that service may fail until it is back online.