ILM waiting for allocation

I have several days of indexes stopped at this step. However I'm unable to determine why exactly ILM won't progress. Advice on where I could look to find the mis-configuration would be appreciated.

Excerpt from GET myindex/_ilm/explain:

  "step_info" : {
    "message" : "Waiting for [1] shards to be allocated to nodes matching the given filters",
    "shards_left_to_allocate" : 1,
    "all_shards_active" : true,
    "actual_replicas" : 0

Hi Tripodal,

In this case, it looks like your index has some allocation settings that aren't currently satisfied, so a couple of questions:

  • Can you post the entire explain output for this index, as well as the policy itself?
  • Is the index green or does it have unassigned replicas or primaries?
  • What are the index's settings (GET /myindex/_settings)? This will give us a better idea of whether there are any allocation settings preventing the allocation from working.
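For reference, those three checks look roughly like this (a sketch; substitute your own index name, and the policy name is a placeholder until we know it):

```
GET myindex/_ilm/explain
GET _ilm/policy/<your-policy-name>
GET myindex/_settings
```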

The index is green. ILM was initially working, if that helps. I suspect this may be a configuration error, or a result of us adjusting the policy after its creation.

Explain output:

   "indices" : {
     "securitylog-6.5.3-000013" : {
       "index" : "securitylog-6.5.3-000013",
       "managed" : true,
       "policy" : "SecurityLogsILM",
       "lifecycle_date_millis" : 1555302289177,
       "phase" : "warm",
       "phase_time_millis" : 1555620708157,
       "action" : "allocate",
       "action_time_millis" : 1555302290521,
       "step" : "check-allocation",
       "step_time_millis" : 1555620708157,
       "step_info" : {
         "message" : "Waiting for [1] shards to be allocated to nodes matching the given filters",
         "shards_left_to_allocate" : 1,
         "all_shards_active" : true,
         "actual_replicas" : 0
       },
       "phase_execution" : {
         "policy" : "SecurityLogsILM",
         "phase_definition" : {
           "min_age" : "0ms",
           "actions" : {
             "allocate" : {
               "include" : { },
               "exclude" : { },
               "require" : {
                 "box_type" : "hot"
               }
             },
             "forcemerge" : {
               "max_num_segments" : 1
             },
             "set_priority" : {
               "priority" : 50
             }
           }
         },
         "version" : 4,
         "modified_date_in_millis" : 1555620704591
       }
     }
   }
}

Settings output:

 {
   "securitylog-6.5.3-000013" : {
     "settings" : {
       "index" : {
         "mapping" : {
           "total_fields" : {
             "limit" : "10000"
           }
         },
         "refresh_interval" : "5s",
         "translog" : {
           "sync_interval" : "5s",
           "durability" : "async"
         },
         "provided_name" : "securitylog-6.5.3-000013",
         "frozen" : "false",
         "creation_date" : "1555263287825",
         "priority" : "50",
         "number_of_replicas" : "0",
         "uuid" : "3uRyo267T460aAC-obojgg",
         "version" : {
           "created" : "6070199"
         },
         "lifecycle" : {
           "name" : "SecurityLogsILM",
           "rollover_alias" : "securitylogs-6.5.3",
           "indexing_complete" : "true"
         },
         "routing" : {
           "allocation" : {
             "require" : {
               "box_type" : "warm"
             }
           }
         },
         "search" : {
           "throttled" : "false"
         },
         "number_of_shards" : "1"
       }
     }
   }
 }

The target node's disk usage was above the 85% watermark. /var/log/elasticsearch had the warning; I only saw it once I scrolled back past what I was tailing.
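For anyone hitting the same thing, one way to spot this without digging through the logs is to compare per-node disk usage against the configured watermarks (a sketch; the filter_path parameter just trims the output):

```
GET _cat/allocation?v
GET _cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk*
```

If disk.percent on the target node is above cluster.routing.allocation.disk.watermark.low (85% by default), no new shards will be allocated to it.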

It would be nice if ILM indicated that as the reason.

We do have an API for that: the cluster allocation explain API.
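For example, asking it about the primary of shard 0 of the stuck index (a sketch; this index has one primary and no replicas, per the settings above) returns a per-node explanation of why the shard can't be allocated or moved, including disk-watermark deciders:

```
GET _cluster/allocation/explain
{
  "index": "securitylog-6.5.3-000013",
  "shard": 0,
  "primary": true
}
```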