Greetings.
I have implemented index lifecycle management (ILM) on Beats. Here's the configuration we are using:
PUT /_ilm/policy/beat_default_lifecycle_policy
{
    "policy": {
        "phases": {
            "hot": {
                "min_age": "0ms",
                "actions": {
                    "rollover": {
                        "max_size": "5gb"
                    },
                    "set_priority": {
                        "priority": 100
                    }
                }
            },
            "delete": {
                "min_age": "7d",
                "actions": {
                    "delete": {}
                }
            }
        }
    }
}
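For reference, rollover also needs a write alias pointing at the current index; Beats normally bootstraps this itself, which is why the index settings further down carry index.lifecycle.rollover_alias. As a rough sketch only (not needed when Beats has already set things up), a manual bootstrap of the first managed index would look something like:

PUT /packetbeat-7.3.1-2019.09.10-000001
{
    "aliases": {
        "packetbeat-7.3.1": {
            "is_write_index": true
        }
    }
}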
 
So... if an index grows over 5GB, it should roll over and then be deleted after 7 days. The indexes are not, however, being deleted.
One index this ILM policy should apply to is packetbeat-7.3.1-2019.09.10-000001. Here are some of its settings:
{
  "index.blocks.read_only_allow_delete": "false",
  "index.priority": "1",
  "index.write.wait_for_active_shards": "1",
  "index.lifecycle.name": "beat_default_lifecycle_policy",
  "index.lifecycle.rollover_alias": "packetbeat-7.3.1",
  "index.mapping.total_fields.limit": "10000",
...
}
 
It's well over 5GB.  Shouldn't this index be deleted by ES?
Thank you.
dakrone (Lee Hinman), December 9, 2019, 10:21pm
The size used for rollover is the size of the primary shards; I don't know whether the screenshot's "Storage size" is just the primary or whether it factors in the size of the replica as well.
You can check with:
GET /_cat/shards/packetbeat-7.3.1-2019.09.10-000001?v
 
You can also see the ILM explanation with:
GET /packetbeat-7.3.1-2019.09.10-000001/_ilm/explain?human
 
Which would be helpful to see.
Thank you for the reply.  Interesting...
When I run the first command, it outputs:
index                              shard prirep state          docs store ip        node
packetbeat-7.3.1-2019.09.10-000001 0     p      STARTED    21088072 7.9gb 127.0.0.1 serverName
packetbeat-7.3.1-2019.09.10-000001 0     r      UNASSIGNED
 
dakrone (Lee Hinman), December 9, 2019, 10:49pm
            
Okay, it does look like it's large enough to roll over.
What was the output of the second command?
That one shows an issue, but I'm not sure how to address it or whether it's causing the problem.
{
  "indices" : {
    "packetbeat-7.3.1-2019.09.10-000001" : {
      "index" : "packetbeat-7.3.1-2019.09.10-000001",
      "managed" : true,
      "policy" : "beat_default_lifecycle_policy",
      "lifecycle_date" : "2019-09-10T17:38:41.521Z",
      "lifecycle_date_millis" : 1568137121521,
      "age" : "90.21d",
      "phase" : "hot",
      "phase_time" : "2019-09-10T17:38:41.609Z",
      "phase_time_millis" : 1568137121609,
      "action" : "rollover",
      "action_time" : "2019-09-10T17:40:02.228Z",
      "action_time_millis" : 1568137202228,
      "step" : "ERROR",
      "step_time" : "2019-09-26T02:40:08.666Z",
      "step_time_millis" : 1569465608666,
      "failed_step" : "check-rollover-ready",
      "step_info" : {
        "type" : "master_not_discovered_exception",
        "reason" : null,
        "stack_trace" : """MasterNotDiscoveredException[null]
          at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$3.onTimeout(TransportMasterNodeAction.java:251)
          at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:325)
          at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:252)
          at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:572)
          at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688)
          at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
          at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
          at java.base/java.lang.Thread.run(Thread.java:835)
          """
      },
      "phase_execution" : {
        "policy" : "packetbeat-7.3.1",
        "phase_definition" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        },
        "version" : 1,
        "modified_date" : "2019-09-10T17:38:41.280Z",
        "modified_date_in_millis" : 1568137121280
      }
    }
  }
}
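As an aside, the phase_execution section above is the phase definition cached when the index entered the hot phase; it references a policy named packetbeat-7.3.1 with a 50gb max_size rather than the 5gb in beat_default_lifecycle_policy. The live policy can be compared against that cached definition with:

GET /_ilm/policy/beat_default_lifecycle_policy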
 
dakrone (Lee Hinman), December 9, 2019, 10:54pm
            
              You should be able to retry this with:
POST /packetbeat-7.3.1-2019.09.10-000001/_ilm/retry
 
This should retry the step since it's in an error state.
In later versions of ES we've added automatic retry for some steps.
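If a retry doesn't clear the error, it may also be worth confirming that the ILM service itself is running cluster-wide:

GET /_ilm/status

which should report an "operation_mode" of "RUNNING".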
              That operation was successful.  Thank you.
system (system), January 6, 2020, 10:57pm
              This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.