I am monitoring sensor data with ML. Each time the sensor is replaced, the values jump; that's something I'm OK with. But after the jump, the model bounds get wider. I don't want this; I want to restart the model from scratch.
My job uses partition_field_name, meaning the job is split by sensor. So I want to reset only one model, not the others.
You cannot reset the model for an individual partition, only the job as a whole.
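For completeness, here is a sketch of the whole-job workaround, assuming you are on a version that has the reset anomaly detection job API (7.14+). `my_sensor_job` is a placeholder job ID; substitute your own:

```
# Close the job first
POST _ml/anomaly_detectors/my_sensor_job/_close

# Then clear all model state (this wipes every partition, not just one)
POST _ml/anomaly_detectors/my_sensor_job/_reset
```

After reopening the job and restarting its datafeed, modeling begins from scratch for all sensors, so only use this if losing the other partitions' models is acceptable.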
Just so you know, the model should adapt relatively quickly to the new sensor's behavior. ML is designed for this, so there's less burden on model maintenance.
By the way, do you really only get sensor information once per day as your screenshot suggests?
Anyway, I do not find that the model adapts very quickly. Here is another example. As you can see, it took the model about 15 days to drop down, and the accuracy is very low for several months from that point.
However, even so, with only daily samples, adjusting to the change may seem to take a long time in calendar terms, but in terms of the number of samples it's actually not much: 15 days of adaptation is only 15 data points.