We have multiple clusters running autonomously, and we've created ML jobs within a Dev environment. I'm admittedly a real newbie with ML and AI, but my understanding is that the algorithms continuously update as they analyze data, improving their accuracy. I see the blue shaded areas showing the upper and lower expected bounds, so I believe the model is learning more as time progresses, correct?
If that's the case, how can I migrate my "experienced" algorithms from the Dev environment to a Production one without restarting the training? In other words, where does the learned state actually live? Along with that, I'd hope the process would let me back up/archive existing, "educated" jobs so I don't have to start over in the event of a rebuild. The JSON for existing jobs contains parameters and configuration info, but I don't see any algorithm state, and certainly no history of what the model has already learned.
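For reference, the job JSON I'm looking at contains only configuration along these lines (a rough sketch from memory, not an exact copy; the job name, field names, and values here are my own hypothetical example):

```json
{
  "job_id": "dev-cpu-usage",
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      { "function": "mean", "field_name": "cpu.pct" }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```

Everything in there looks like settings I chose up front; nothing looks like the accumulated model state I'm asking about.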
I know I'm really new at ML, so if anyone has suggestions, please let me know. Thank you!