I have a pending task named update_tsdb_data_stream_end_times and I don't know what it is or what effect it may have. It appears to be a task created by UpdateTimeSeriesRangeService.java. Any idea why it might have gotten stuck? Could this stuck task be related to my other post, Node fails but cluster holds no election and no failover occurs, where pending tasks had been queued for up to 24 hours before our cluster became unusable, even though only one Elasticsearch instance went down? This post is about a different cluster that also had a long-running pending task. I'm trying to understand what this task is and whether the two issues could be related.
/_cluster/pending_tasks
{
  "tasks": [
    {
      "insert_order": 50312,
      "priority": "URGENT",
      "source": "update_tsdb_data_stream_end_times",
      "executing": false,
      "time_in_queue_millis": 7296762,
      "time_in_queue": "2h"
    }
  ]
}
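For extra context, these are the checks I'm planning to run next to see whether the elected master is actually busy or whether this task is simply never picked up (standard cluster APIs only, so hopefully safe to run on a struggling cluster):

# which node is currently the elected master
GET /_cat/master?v
# hot threads on the elected master, to see if its master service thread is stuck on something
GET /_nodes/_master/hot_threads
# the queue itself, to see whether the task ever moves to "executing": true
GET /_cluster/pending_tasks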
Same as the issue in the linked thread: in this environment, the Elasticsearch logs are flooded with other, more recent tasks that are failing with ProcessClusterEventTimeoutException:
[2024-11-14T18:31:04,436][WARN ][rest.suppressed ] [<redacted node-1>] path: /designer-objects-ia/_settings, params: {master_timeout=30s, index=designer-objects-ia, timeout=30s}, status: 503
org.elasticsearch.transport.RemoteTransportException: [<redacted node-2>][<redacted-ip>][indices:admin/settings/update]
Caused by: org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (update-settings
Using JRE 17.
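For reference, the requests that are timing out are plain index settings updates of this shape, based on the path and params in the log line above (the actual settings body isn't shown in the log, so index.refresh_interval here is only a stand-in):

PUT /designer-objects-ia/_settings?master_timeout=30s&timeout=30s
{
  "index.refresh_interval": "30s"
}

They come back with 503 once the 30s master timeout expires, which matches the ProcessClusterEventTimeoutException reported by the master node.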