Restoring ML Jobs using Elasticdump

I'm currently working on an Elasticsearch SDLC and would like to store the machine learning jobs in version control so they can be restored, tracked, etc. I have been doing the same with dashboards and watches, using Elasticdump to export/restore the respective indices. When I try the same with the .ml-state and .ml-anomalies-shared indices, the jobs do not appear in the Machine Learning tab, nor do they show up when I query the ML APIs. Is there a way this can be done?
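
For context, the kind of elasticdump call I've been using for dashboards and watches looks roughly like this (host, index, and file names are placeholders):

elasticdump --input=http://localhost:9200/.kibana --output=kibana-objects.json --type=data
elasticdump --input=kibana-objects.json --output=http://localhost:9200/.kibana --type=data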

I even stopped the datafeeds and closed the jobs before exporting, to no avail. I don't want to use the snapshot/restore feature because it isn't as easy to version control or package.
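
For reference, I stopped and closed them with the usual APIs, roughly like this (host and job ID are placeholders):

curl -X POST "http://localhost:9200/_xpack/ml/datafeeds/datafeed-${JOB_ID}/_stop"
curl -X POST "http://localhost:9200/_xpack/ml/anomaly_detectors/${JOB_ID}/_close"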

Thanks

Currently the ML job definitions are not stored in an index (like the state and result docs are). They are stored in the cluster state.
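
You can see this by looking at the cluster state rather than the indices, e.g. something along these lines (host is a placeholder, and the exact location of the ML metadata in the response can vary by version):

curl -s "http://localhost:9200/_cluster/state/metadata?pretty&filter_path=metadata.ml"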

Okay, thanks, Rich. What options do I have to restore the job state? Only snapshot/restore?

Unfortunately, there is no currently supported one-step mechanism to export jobs, models, and results from one system to another. There are several enhancement requests open to provide this capability in the future.

The only "recommended" option is to export the job and datafeed configs via the API:

GET _xpack/ml/anomaly_detectors/${JOB_ID}?pretty
GET _xpack/ml/datafeeds/datafeed-${JOB_ID}?pretty

Then, refactor those configs as needed and "PUT" them on the new cluster. From there you'd need to rebuild the models and results by re-running the jobs against the raw data on the new cluster.
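
As a rough sketch of that flow (hosts and the job ID are placeholders, and the exact read-only fields you need to strip may vary by version):

# Old cluster: export the job and datafeed configs to files you can version control
curl -s "http://old-cluster:9200/_xpack/ml/anomaly_detectors/${JOB_ID}?pretty" > job.json
curl -s "http://old-cluster:9200/_xpack/ml/datafeeds/datafeed-${JOB_ID}?pretty" > datafeed.json

# Edit the files: unwrap the "jobs"/"datafeeds" array and remove read-only fields
# such as create_time, job_version, and model_snapshot_id before re-importing

# New cluster: recreate the job and datafeed from the edited configs
curl -s -X PUT "http://new-cluster:9200/_xpack/ml/anomaly_detectors/${JOB_ID}" -H 'Content-Type: application/json' -d @job.json
curl -s -X PUT "http://new-cluster:9200/_xpack/ml/datafeeds/datafeed-${JOB_ID}" -H 'Content-Type: application/json' -d @datafeed.json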
