ELK stack lifecycle management

I just took over an ELK stack app and I want to set up some proper index lifecycle management policies. From my understanding, I create an index lifecycle policy, and then I add/associate that policy with an index template. Going forward, indices generated using that template (e.g. logstash indices) would be managed according to the policy associated with said template. Is that correct?
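
For example (a rough sketch just to check my understanding of the flow; the policy name, template name, rollover alias, and the 30d/365d values are placeholders):

PUT _ilm/policy/logstash-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "30d" }
        }
      },
      "delete": {
        "min_age": "365d",
        "actions": { "delete": {} }
      }
    }
  }
}

PUT _index_template/logstash-template
{
  "index_patterns": ["logstash-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "logstash-policy",
      "index.lifecycle.rollover_alias": "logstash"
    }
  }
}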

Another question I have is: if I wanted to change the policy associated with an old index, how would I do that? I see that in the Elasticsearch console you can edit an index's settings (JSON) like this:

{
  "index.blocks.read_only_allow_delete": "false",
  "index.priority": "1",
  "index.query.default_field": [
    "*"
  ],
  "index.write.wait_for_active_shards": "1",
  "index.lifecycle.name": "logstash-policy",
  "index.lifecycle.rollover_alias": "logstash",
  "index.lifecycle.indexing_complete": "true",
  "index.routing.allocation.include._tier_preference": "data_content",
  "index.refresh_interval": "5s",
  "index.number_of_replicas": "1"
}

Could I simply add in the name of the lifecycle policy I want this index to use, or is there another way to change the lifecycle policy for an old logstash index?

You could also make a request in Dev Tools in Kibana to change the policy name:

PUT index-name/_settings 
{
  "index": {
    "lifecycle": {
      "name": "policy-name"
    }
  }
}
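
To confirm it took effect, you can fetch the index settings back and check the index.lifecycle.name value:

GET index-name/_settings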

Thanks, that's another good approach. So I changed the policy, and most of the indices are now in the warm phase, but none have moved to the cold phase (frozen) or the delete phase, even though they are well past the ages prescribed in the policy. Why might that be the case?

What does your policy look like?

You can get an explanation using the ILM explain API on the index:

GET index-name/_ilm/explain

Hmm, I ran the command, but it doesn't seem to fully explain the policy. Anything older than 200 days should be in the cold phase, and anything older than 365 days should be in the delete phase.

Here is what it returns:

{
  "indices" : {
    "logstash-2022.08.20-000021" : {
      "index" : "logstash-2022.08.20-000021",
      "managed" : true,
      "policy" : "logstash-policy",
      "lifecycle_date_millis" : 1663602289648,
      "age" : "217.31d",
      "phase" : "warm",
      "phase_time_millis" : 1682347483938,
      "action" : "migrate",
      "action_time_millis" : 1682347485540,
      "step" : "check-migration",
      "step_time_millis" : 1682347486741,
      "step_info" : {
        "message" : "Waiting for all shard copies to be active",
        "shards_left_to_allocate" : -1,
        "all_shards_active" : false,
        "number_of_replicas" : 1
      },
      "phase_execution" : {
        "policy" : "logstash-policy",
        "phase_definition" : {
          "min_age" : "90d",
          "actions" : {
            "set_priority" : {
              "priority" : 50
            }
          }
        },
        "version" : 2,
        "modified_date_in_millis" : 1682347483852
      }
    }
  }
}
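
For reference, the relevant parts of the policy are along these lines (a simplified sketch based on the thresholds above; the hot/rollover phase and any additional per-phase actions are omitted):

PUT _ilm/policy/logstash-policy
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "90d",
        "actions": {
          "set_priority": { "priority": 50 }
        }
      },
      "cold": {
        "min_age": "200d",
        "actions": {}
      },
      "delete": {
        "min_age": "365d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}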

I figured out the issue. The number of replicas was set to 1, but I am only running a single instance of Elasticsearch, i.e. one node. This left all the indices in yellow health, so they were stuck waiting to move to the next phase. I had to update number_of_replicas to 0; after that the indices went green. I'm not sure if the command below is really best practice, though, considering it updates ALL indices...

PUT /_settings
{
  "index":{
    "number_of_replicas": 0
  }
}

You can update indices individually as well.
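
For example, something like this targets only the logstash indices (using a wildcard), so other indices keep their replica settings:

PUT logstash-*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}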
