Some questions about Uptime ML Integration


Really like the Uptime ML integration, but:

  • Requiring us to enable ML for each Uptime monitor separately is a lot of manual clicking and verifying when you have hundreds of monitors.

Is there a way to enable the ML integration for all monitors (or all monitors with a certain tag) at once?

  • Currently the Uptime ML integration is one of the few where it is not possible to set a prefix for the job. As a result, the Uptime ML jobs clutter up the ML job management overview. Again, I could clone each job separately and configure a prefix myself, but that too is a long manual process.

Please let me know if the above features are worthy of a GH feature request and I'll create them.



@willemdh I am glad you like it. Yes, we are actually looking for feedback on the feature, and would be glad to consider a GH feature request.

Both features, enabling the job for all monitors and setting a prefix, are interesting and we have considered them; it's just a matter of priority.





More questions :slight_smile:

I seem unable to clone the Uptime jobs. The datafeed is stopped and the job is closed. Is this expected?

Other stopped / closed jobs are clonable. How is the relationship between Uptime monitors and their ML jobs created? Could I manually create an ML job that is linked to an Uptime monitor?



The most common reason for not being able to clone an ML job is that we cannot find a matching Kibana index pattern.

Can you create an index pattern for heartbeat-* manually? I mentioned the method in another issue: Missing index pattern heartbeat-*

After that you should be able to clone it. By the way, why do you want to clone it? If the job is stopped, you can delete it and recreate it from the Uptime UI again.

@shahzad31 I'm trying to fine-tune these Uptime ML jobs a bit and give them the correct prefix in the meantime. I might also try to script the creation of the Uptime ML jobs. I can clone other (non-Uptime) jobs without an issue, and I definitely have a heartbeat-* index pattern, so that shouldn't be the issue.

How does Uptime know there is an ML job for a given Uptime monitor?

I will have to take a look at the code to give you a definite answer, but I think the Uptime monitor id is used in the ML job id.

@willemdh OK, so the monitorId is inserted in the prefix.

Hmm, the id of the ML job seems to be the monitor name with https:// stripped and dots replaced with underscores.

For example, one monitor has begraafplaatsen_stad_gent_high_latency_by_geo as its ML job id.

Does this imply I cannot create an Uptime ML job with a custom prefix, or that the Uptime monitor would never pick it up?

I still can't clone. Is it possible the Uptime ML jobs have been made 'immutable' in some way?

Yes, you are right, it's not simply the monitorId; there is a special function which formats it a bit to remove special characters etc.

I don't know how much JavaScript you know, but this might be helpful.

Also, the job should be created inside the Uptime ML module in the ML app, because that's how it queries the jobs and then matches each job id against the formatted monitorId.
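For anyone following along: based on the single example earlier in the thread (https://begraafplaatsen.stad.gent becoming begraafplaatsen_stad_gent_high_latency_by_geo), the formatting might look roughly like the sketch below. This is a guess at the behavior, not the actual Kibana function; the function name, the exact regex, and the fixed suffix are assumptions inferred from that one example.

```javascript
// Hypothetical sketch of how a monitor id/URL might be turned into an
// Uptime ML job id. NOT the real Kibana implementation, just a guess
// that reproduces the example seen in this thread.
function getMlJobId(monitorId) {
  const normalized = monitorId
    .replace(/^https?:\/\//, '')      // strip the protocol
    .replace(/[^a-zA-Z0-9]+/g, '_')   // replace dots, slashes, dashes, etc. with underscores
    .toLowerCase();
  return `${normalized}_high_latency_by_geo`; // suffix observed in the example job id
}

console.log(getMlJobId('https://begraafplaatsen.stad.gent'));
// -> begraafplaatsen_stad_gent_high_latency_by_geo
```

If this guess is close, a custom prefix would break the match, which would explain why Uptime only picks up jobs whose id follows this exact pattern.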


Thanks for clarifying @shahzad31

Also, the job should be created inside the Uptime ML module in the ML app

I guess automating the process won't be easy if this needs to be done with JavaScript from inside the Uptime ML module. I was hoping to just use the REST API.

The way it's currently implemented is not very automatable.. :upside_down_face:

Which imho is kind of weird in these times.
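For what it's worth, if the job id convention really is just the formatted monitor id plus a fixed suffix, scripting creation through the Elasticsearch anomaly detection REST API (PUT _ml/anomaly_detectors/&lt;job_id&gt;) might be possible. The sketch below only builds the request; the id format, the detector configuration, the monitor.duration.us field, and whether Uptime would actually pick such a job up are all assumptions, not confirmed behavior.

```javascript
// Hedged sketch: build a PUT _ml/anomaly_detectors request for an Uptime
// monitor. The job id format is inferred from this thread; the analysis
// config is a plausible guess, not what the Uptime UI actually creates.
function buildUptimeMlJobRequest(monitorId) {
  const jobId = monitorId
    .replace(/^https?:\/\//, '')
    .replace(/[^a-zA-Z0-9]+/g, '_')
    .toLowerCase() + '_high_latency_by_geo';
  return {
    method: 'PUT',
    path: `/_ml/anomaly_detectors/${jobId}`,
    body: {
      description: `Uptime duration anomalies for ${monitorId}`,
      analysis_config: {
        bucket_span: '15m',
        // high_mean on the Heartbeat duration field (assumed field name)
        detectors: [{ function: 'high_mean', field_name: 'monitor.duration.us' }],
      },
      data_description: { time_field: '@timestamp' },
    },
  };
}
```

Even if the request succeeds, the earlier point stands: unless the id matches what the Uptime UI computes, the monitor may never associate itself with the job.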


No, I didn't mean to say that @willemdh, I just meant that it should use the preconfigured Uptime ML job.

@willemdh I think I figured out the reason why you can't clone Uptime jobs: you need to create a more specific index pattern for Heartbeat. It should be heartbeat-7* instead of heartbeat-*.

Let me know if that helps. Also check this JSFiddle:

You can use this to get the ML job id from a monitorId; that's what the Uptime UI will expect as the ML Job Id.
Just enter the monitor name at the end, like 'android-homepage', and hit run.

Also, once you clone the job, make sure to add these filters for the specific monitor's data, otherwise the job will run on the whole index:

    filter: [
      { term: { '': monitorId } },
      { range: { '': { gt: 0 } } },
    ],
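To make that snippet concrete: the field names were lost from the post above (the quotes are empty), so in this hedged sketch they are left as parameters. For Heartbeat data they would be whatever fields hold the monitor id and the measured duration; the overall bool/filter shape is standard Elasticsearch query DSL.

```javascript
// Sketch of a datafeed query using the two filters from the snippet above.
// idField and durationField are parameters because the actual field names
// were missing from the original post.
function buildMonitorFilterQuery(monitorId, idField, durationField) {
  return {
    bool: {
      filter: [
        { term: { [idField]: monitorId } },          // restrict to one monitor
        { range: { [durationField]: { gt: 0 } } },   // only docs with a measured duration
      ],
    },
  };
}
```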



This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.