Hi,
We are running ML on Elastic Cloud 6.7.
We're trying to run a few ML jobs, but we're getting an out-of-memory error when submitting them via the API. In the configuration that fails, we set a history of 1 month; if we run the same job with a history of 1 day, the job opens and runs in real time.
My question is: if I run the job with 1 day of history, will the model's memory usage grow over time so that it fails in a month's time?
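For reference, one way to watch whether the model's memory is actually growing is to poll the job stats API, which reports `model_bytes` and `memory_status` per job. Below is a minimal sketch, assuming a hypothetical cluster URL, credentials, and job id, against the 6.x `_xpack/ml` endpoints:

```python
# Minimal sketch: poll the 6.x ML job stats API to watch model memory growth.
# ES_URL, AUTH, and JOB_ID are hypothetical placeholders for your own cluster.
import requests

ES_URL = "https://my-cluster.example.com:9243"   # hypothetical endpoint
AUTH = ("elastic", "changeme")                    # hypothetical credentials
JOB_ID = "my-ml-job"                              # hypothetical job id

resp = requests.get(
    f"{ES_URL}/_xpack/ml/anomaly_detectors/{JOB_ID}/_stats",
    auth=AUTH,
)
resp.raise_for_status()

for job in resp.json()["jobs"]:
    size_stats = job["model_size_stats"]
    # model_bytes grows as the model learns more entities; memory_status moves to
    # "soft_limit"/"hard_limit" as it approaches the job's model_memory_limit.
    print(job["job_id"], size_stats["model_bytes"], size_stats["memory_status"])
```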
It's probably safe to assume that you'd run into memory problems later on, but I'd need more information before saying anything definitive, so I'd like to ask a few clarifying questions:
What is the exact error message you're getting from the API? Can you copy/paste it here?
What is the detector configuration of the job you're submitting?
If the job configuration uses splitting, what is the cardinality of the field(s) used for splitting? (This affects the anticipated memory footprint of the job; see the config sketch below.)
What is the size of the node that you're submitting the ML job to, and how many other jobs are open or active on that node?
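To make the second and third questions concrete, here is a minimal sketch of a 6.x job-creation request showing where the detector, the split field, and the memory limit live. The index/field names, job id, URL, credentials, and the 2gb limit are all hypothetical placeholders, not a recommendation for your data:

```python
# Minimal sketch: create a 6.x anomaly detection job with a split field and
# an explicit model_memory_limit. All names and values below are placeholders.
import requests

ES_URL = "https://my-cluster.example.com:9243"   # hypothetical endpoint
AUTH = ("elastic", "changeme")                    # hypothetical credentials

job_config = {
    "analysis_config": {
        "bucket_span": "15m",
        "detectors": [
            {
                "function": "mean",
                "field_name": "responsetime",           # hypothetical metric field
                # The split field: its cardinality largely drives memory use.
                "partition_field_name": "service.name",
            }
        ],
        "influencers": ["service.name"],
    },
    # Raise this if the job is hitting its memory limit and the node has headroom.
    "analysis_limits": {"model_memory_limit": "2048mb"},
    "data_description": {"time_field": "@timestamp"},
}

resp = requests.put(
    f"{ES_URL}/_xpack/ml/anomaly_detectors/my-ml-job",  # hypothetical job id
    auth=AUTH,
    json=job_config,
)
print(resp.status_code, resp.json())
```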