Maybe somebody can help me solve the following problem.
Problem description: if I create and start a new multi-metric job, the memory status always changes to hard_limit. (The same job worked on Windows without any problem.)
OS: Ubuntu 16.04.5 LTS (GNU/Linux 4.15.18-1-pve x86_64)
Java: Oracle 1.8.0_181-b13 (x64)
Heap size: 4GB
Docs Count: 8431
Storage Size: 2.2mb
model_memory_limit: I tried different values from 12MB to 1200MB (see the config sketch below)
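For reference, here is a minimal sketch of how the job is set up. The job itself was created with the Kibana multi-metric wizard, so the job name, field names, and bucket span below are placeholders, and the endpoint assumes a 6.x cluster; the relevant part is just where analysis_limits.model_memory_limit is set:

```
# Sketch only: job name, field names, and bucket span are placeholders,
# not my actual configuration; on 7.x the path is /_ml/anomaly_detectors/...
curl -X PUT "localhost:9200/_xpack/ml/anomaly_detectors/my-multimetric-job" \
  -H 'Content-Type: application/json' -d'
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "mean",
        "field_name": "my_metric",
        "partition_field_name": "my_partition_field"
      }
    ],
    "influencers": ["my_partition_field"]
  },
  "data_description": { "time_field": "@timestamp" },
  "analysis_limits": { "model_memory_limit": "1200mb" }
}
'
```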
Job error message:
Job memory status changed to hard_limit at 83.7kb; adjust the analysis_limits.model_memory_limit setting to ensure all data is analyzed.
If I create another job on a similar but bigger index (with 3 million documents), the limit value in the error message is 69mb.
I didn't see any error messages in the Elasticsearch log.
The error was reproduced on another Linux machine.