TamasK - October 3, 2018, 11:00am
Hi,
Maybe somebody can help me solve the following problem.
Problem description: if I create and start a new multi-metric job, the memory status always changes to hard_limit. (The same job worked on Windows without any problem.)
OS: Ubuntu 16.04.5 LTS (GNU/Linux 4.15.18-1-pve x86_64)
Memory: 12GB
Elastic: 6.4.1
Java: Oracle 1.8.0_181-b13 (x64)
Heap size: 4GB
vm.max_map_count=262144
Index:
Docs Count: 8431
Storage Size: 2.2mb
Job:
established_model_memory: 143.2 KB
model_memory_limit: I tried different values from 12MB to 1200MB
Job error message:
Job memory status changed to hard_limit at 83.7kb; adjust the analysis_limits.model_memory_limit setting to ensure all data is analyzed.

If I create another job on a similar but bigger index (with 3 million documents), the limit value in the error message is 69mb.
I didn't see any error message in the Elasticsearch log.
The error was reproduced on another Linux machine.
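For reference, one way to change the limit on an existing job is the update API. This is only a sketch: the job name and the value are placeholders, and as far as I know the job has to be closed before analysis_limits can be changed.

POST _xpack/ml/anomaly_detectors/yourjobnamehere/_update
{
  "analysis_limits": {
    "model_memory_limit": "1200mb"
  }
}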
            
              Can you provide the following info on your ML Job?
GET _xpack/ml/anomaly_detectors/yourjobnamehere/_stats?pretty
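The part of the response that matters here is model_size_stats (structure from memory; the values below are only placeholders):

{
  "jobs": [
    {
      "job_id": "yourjobnamehere",
      "model_size_stats": {
        "model_bytes": "<current model size in bytes>",
        "memory_status": "<ok | soft_limit | hard_limit>"
      }
    }
  ]
}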
            
Thanks - sorry to ask again, but I actually wanted the full job details, so can you please rerun
GET _xpack/ml/anomaly_detectors/dev2/?pretty

(that is, without the _stats part)
            
              Thank you - can you also tell me the approximate cardinality of the field ve_itemno.keyword?
GET yourindexname/_search
{
  "size": 0,
  "aggs": {
    "cardinality": {
      "cardinality": {
        "field": "ve_itemno.keyword"
      }
    }
  }
}
              
TamasK - October 3, 2018, 3:06pm
              {
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 8431,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "cardinality": {
      "value": 5
    }
  }
} 
            
              Thanks for supplying the info - I'll discuss this with others internally and hopefully get back to you soon.
              
TamasK - October 3, 2018, 4:48pm
Some information that may help your investigation: my original index has 3 million documents, and the ve_itemno cardinality there is 16141.
I wanted to create a job that processes only a small part of the documents.
1. First attempt: I created an advanced job and selected the documents with a query.
2. Second attempt: I created a new, smaller index from the original with the reindex command and then used a multi-metric job on it (a reindex sketch is below).
Both solutions worked on Windows but failed on Linux.
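A sketch of that kind of reindex; the index names and the query values here are placeholders, not the exact ones used:

POST _reindex
{
  "source": {
    "index": "yourindexname",
    "query": {
      "terms": {
        "ve_itemno.keyword": ["item-1", "item-2"]
      }
    }
  },
  "dest": {
    "index": "yourindexname_small"
  }
}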
              
TamasK - October 4, 2018, 7:39am
A new small index was also created the normal way (PUT and Logstash; a minimal sketch is below), but I received the same hard_limit error message.
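A sketch of that kind of index creation; the index name and settings are placeholders, with Logstash then indexing the documents into it:

PUT yourindexname_small2
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}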
              
droberts195 (David Roberts) - October 17, 2018, 10:42am
We believe the bug here is that hard_limit can be incorrectly triggered too soon when the bucket span is 1 day or longer. We have made a change that should resolve this problem.
              
system (system) Closed - November 14, 2018, 10:42am
              This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.