In one of our ML jobs, there don't appear to be any errors or warnings in the job messages.
However, the latest_record_timestamp is lagging behind the current timestamp.
Kindly advise how we can investigate this further.
### Counts
job_id pred_maint-ABCDEF-deny-high-count
processed_record_count 46,880,506,574
processed_field_count 348,373,955,401
input_bytes 18.4 TB
input_field_count 348,373,955,401
invalid_date_count 0
missing_field_count 26,670,097,191
out_of_order_timestamp_count 0
empty_bucket_count 2
sparse_bucket_count 2
bucket_count 3,275
earliest_record_timestamp 2026-02-11 11:21:59
latest_record_timestamp 2026-03-17 11:14:06
last_data_time 2026-03-20 16:43:29
latest_empty_bucket_timestamp 2026-02-11 12:00:00
latest_sparse_bucket_timestamp 2026-03-16 03:45:00
input_record_count 46,880,506,574
log_time 2026-03-20 16:43:29
latest_bucket_timestamp 2026-03-17 09:45:00
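For reference, the lag can be quantified directly from the counts above. A minimal sketch in plain Python (timestamps copied verbatim from this job's data counts), comparing latest_record_timestamp with last_data_time:

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S"
# Values copied from the data counts above.
latest_record = datetime.strptime("2026-03-17 11:14:06", fmt)  # latest_record_timestamp
last_data = datetime.strptime("2026-03-20 16:43:29", fmt)      # last_data_time

# How far the newest processed record trails the last time data was seen.
lag = last_data - latest_record
print(f"record lag: {lag}")  # record lag: 3 days, 5:29:23
```

So the job last received data recently (last_data_time), but the newest record it has processed is roughly three days old.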
### Model size stats
job_id pred_maint-ABCDEF-deny-high-count
result_type model_size_stats
model_bytes 124.1 MB
peak_model_bytes 130.8 MB
model_bytes_exceeded 0.0 B
model_bytes_memory_limit 512.0 MB
total_by_field_count 103
total_over_field_count 0
total_partition_field_count 102
bucket_allocation_failures_count 0
memory_status ok
assignment_memory_basis current_model_bytes
output_memory_allocator_bytes 29363
categorized_doc_count 0
total_category_count 0
frequent_category_count 0
rare_category_count 0
dead_category_count 0
failed_category_count 0
categorization_status ok
log_time 2026-03-20 13:45:30
timestamp 2026-03-17 10:00:00
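One thing the model size stats appear to rule out is memory pressure. A quick arithmetic check (plain Python, values copied from the stats above) shows the model is well below its configured limit, consistent with memory_status being ok:

```python
# Values copied from the model size stats above.
model_bytes = 124.1 * 1024**2  # model_bytes, 124.1 MB
peak_bytes = 130.8 * 1024**2   # peak_model_bytes, 130.8 MB
limit = 512.0 * 1024**2        # model_bytes_memory_limit, 512.0 MB

usage = model_bytes / limit
print(f"model memory usage: {usage:.1%}")  # model memory usage: 24.2%
```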
### Job timing stats
job_id pred_maint-ABCDEF-deny-high-count
bucket_count 2,783
total_bucket_processing_time_ms 720,306
minimum_bucket_processing_time_ms 20
maximum_bucket_processing_time_ms 1,820
average_bucket_processing_time_ms 258.824
exponential_average_bucket_processing_time_ms 294.209
exponential_average_bucket_processing_time_per_hour_ms 824.883
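The timing stats also look internally consistent: dividing the total processing time by the bucket count reproduces the reported average, which suggests per-bucket processing speed (roughly 259 ms per bucket) is not what is causing a multi-day lag. A quick check:

```python
# Values copied from the job timing stats above.
total_ms = 720_306      # total_bucket_processing_time_ms
bucket_count = 2_783    # bucket_count

avg = total_ms / bucket_count
print(f"average bucket processing: {avg:.3f} ms")  # average bucket processing: 258.824 ms
```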