Correct usage of write indices for rolling over

Hello.
I collect logs from 10 components on dev, stage & prod envs.
I don't write to a single index; a new index is created every day for each component and environment, named like componentname_envYY.MM.DD, e.g. backend_prod22.02.18.
So I don't need to roll over into new indices; I just need to move existing indices to the warm and maybe a closed phase after some time. Questions:

  • Why doesn't the ILM policy wizard allow me to pick a "Close" action? Should I try to forcefully add it to my policy via a direct API request?
  • Please explain how creating (PUTting) a write index would look in my case. Do I need to go through every combination of component and environment and set today's (most recently created) index as the write index, or how should it work? I really don't understand the idea behind these write indices. And why should I create a "first" index when there are existing ones that need to be moved into the warm/closed states?
    Can Elasticsearch just use the next index that is created matching the alias I provided in the index template I created for this policy, and run the lifecycle procedures on it?
    Please explain how I pick which of the indices to set as the write index.
    Thank you.

What version of the Stack are you using?

You can update a policy using the API, yes.
As for the second part of your question, I'm not sure how well using parts of ILM will work with your approach. If you keep your index naming scheme, you are going to have to figure out a way to manage the aliases yourself, for example.
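For illustration, manually managing the alias for the daily indices could look roughly like this in Dev Tools (the index and alias names here just follow the naming scheme from the original post; this is a sketch of the aliases API, not a recommendation):

```
POST _aliases
{
  "actions": [
    { "add": { "index": "backend_prod22.02.17", "alias": "backend_prod", "is_write_index": false } },
    { "add": { "index": "backend_prod22.02.18", "alias": "backend_prod", "is_write_index": true } }
  ]
}
```

Re-adding an existing alias entry with a different `is_write_index` value updates it, so a daily job would flip the flag from yesterday's index to today's.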

@warkolm I'm using v7.10.1. How do I add a "close" action to the warm phase? I only have shrink there now. The Kibana GUI only lets me preview the HTTP request; I can't edit it.

You'll need to consult the ILM docs and update it in Dev Tools.
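Updating a policy in Dev Tools looks roughly like this (the policy name and timings below are placeholders; the exact set of actions allowed in each phase depends on your version, so check the ILM docs for 7.10):

```
PUT _ilm/policy/my_logs_policy
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "7d",
        "actions": {
          "shrink": { "number_of_shards": 1 }
        }
      }
    }
  }
}
```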


@warkolm Unfortunately, the docs do not list a close action in the list of ILM actions ...
If I try it anyway, I get this error:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "named_object_not_found_exception",
        "reason" : "[26:11] unknown field [close]"
      }
    ],
    "type" : "x_content_parse_exception",
    "reason" : "[26:11] [put_lifecycle_request] failed to parse field [policy]",
    "caused_by" : {
      "type" : "x_content_parse_exception",
      "reason" : "[26:11] [lifecycle_policy] failed to parse field [phases]",
      "caused_by" : {
        "type" : "x_content_parse_exception",
        "reason" : "[26:11] [phases] failed to parse field [cold]",
        "caused_by" : {
          "type" : "x_content_parse_exception",
          "reason" : "[26:11] [phase] failed to parse field [actions]",
          "caused_by" : {
            "type" : "x_content_parse_exception",
            "reason" : "[26:11] [actions] failed to parse field [close]",
            "caused_by" : {
              "type" : "named_object_not_found_exception",
              "reason" : "[26:11] unknown field [close]"
            }
          }
        }
      }
    }
  },
  "status" : 400
}

Looks like ILM does not support closing indices. Closing used to be a good way to conserve resources, but it is generally not very useful in more recent versions, which have seen improvements in this area. I would recommend you upgrade to the latest version to benefit from these improvements.


Dear Christian and Mark, thank you for helping me understand what a useless mess this "stack" is. I will use a cron job and API calls via the Python requests library to call the close-index API.
I am surprised the Elastic software is designed this way, but hey, at least I can make it work for me. Beams of love to everyone <3
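A minimal sketch of such a cron script, assuming an unsecured cluster at localhost:9200 and the componentname_envYY.MM.DD naming from the original post (stdlib urllib is used here to keep the sketch dependency-free; swapping in the requests library is a small change):

```python
"""Close indices older than RETENTION_DAYS via the Elasticsearch _close API.

Sketch only: the endpoint, retention period, and lack of auth/error handling
are all assumptions.
"""
from datetime import date, datetime, timedelta
from typing import Optional
from urllib.request import Request, urlopen

ES_URL = "http://localhost:9200"  # assumed endpoint, no auth
RETENTION_DAYS = 7                # assumed retention period


def index_date(index_name: str) -> date:
    """Parse the trailing YY.MM.DD date, e.g. backend_prod22.02.18 -> 2022-02-18."""
    return datetime.strptime(index_name[-8:], "%y.%m.%d").date()


def is_expired(index_name: str, today: date,
               retention_days: int = RETENTION_DAYS) -> bool:
    """True if the index's date is more than retention_days before today."""
    try:
        idx_date = index_date(index_name)
    except ValueError:
        return False  # name doesn't match the scheme (e.g. system indices)
    return (today - idx_date) > timedelta(days=retention_days)


def close_expired_indices(today: Optional[date] = None) -> None:
    today = today or date.today()
    # _cat/indices?h=index returns one index name per line
    with urlopen(f"{ES_URL}/_cat/indices?h=index") as resp:
        names = resp.read().decode().split()
    for name in names:
        if is_expired(name, today):
            urlopen(Request(f"{ES_URL}/{name}/_close", method="POST"))


if __name__ == "__main__":
    close_expired_indices()
```

The date parsing and retention check are pure functions, so the cutoff logic can be unit-tested without a cluster.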

It is generally not recommended to close indices, especially not in light of the improvements in recent versions. This is why I suspect ILM does not support it. What are you looking to achieve by closing indices?

@Christian_Dahlqvist I have only 100 shards available, and I'm writing 10 * 3 (dev + stage + prod) = 30 indices every day (see the original post). Because I create new indices every day, I need to take care of the older ones, but since I can't close them via an ILM policy, my only option besides the cron-plus-API-calls approach I described would be a delete action. I can always delete stuff manually if I run out of space; this topic isn't about running out of space, it's about Elasticsearch being unable to create new indices for me because at some point there are no free shards.
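If deletion turns out to be acceptable after all, a delete phase is straightforward to express in ILM (the policy name and age below are placeholders):

```
PUT _ilm/policy/my_logs_policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```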

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.