ILM policy setup for automated deployment system

Hello there,

I am working on the development of a monitoring application for a large-scale distributed system, and lately I have been facing an Elastic Stack setup problem; I hope you can help me out.

First of all, I have to point out that I have read all the available official documentation and blog posts on the matter. My experience is that the 'hello world' style examples described in these articles are easy to apply until you need to combine them. Moreover, the 'manually send a request to a REST endpoint' approach is simply not viable on a real enterprise system. I do not expect an out-of-the-box solution for my case, but I am hoping that Elastic is thinking about automation and documenting good practices (maybe there are already some useful guides and I just missed them).

That said, here is my setup: I am using Elastic version 7.2; all my components are deployed by Helm into Kubernetes pods; Metricbeat, Logstash and APM are sending data to Elasticsearch (indices named in the example_data-2019.08.06-1 format), and I'd like to use custom ILM policies to optimize resource usage.

The first problem comes when I initialize the whole setup (or redeploy Elasticsearch): events are sent to the Elasticsearch node before I can 'install' the ILM policies and index rollover config via templates. Because of that, my custom configuration is not applied to the first set of indices.

Deleting the faulty indices manually solves the problem: since templates are applied at index creation time, new indices will get the ILM policy. The next issue is the chaos of aliases (and the rather confusing documentation). Through the templates for each index pattern I am able to set the rollover_alias and the write index/alias, but they are applied to every new index, so there would be multiple write indices and the same rollover_alias would point to different indices. Is there an automated solution for configuring aliases and the write index?

Then, when I manually set up ILM policies, templates and the first indices with aliases, I face the problem that rollover creates a new index, but it stays empty and the docs are still written into the initial index (my guess is this is related to the write index still pointing to the original index). I believe I have managed to trigger most of the possible IllegalArgumentExceptions (alias points to multiple indices, no write index, etc.), but I've never seen a stable setup. :slight_smile:

What I am looking for is a well-described practice and ideas on how to configure a stable system with the mentioned components, without running curl commands manually.

Thanks in advance!




We recently implemented ILM in our cluster.

The process we followed was:

  1. Manually create a new index with a new alias, as described in the docs:
curl -X PUT "localhost:9200/datastream-000001?pretty" -H 'Content-Type: application/json' -d'
{
  "aliases": {
    "datastream": {
      "is_write_index": true
    }
  }
}
'
(you can also use date math if you need/want to have dates in your index name; see the date math docs.)
  2. Create the ILM policy and add it to the correct index template. Then manually roll over the index that was just created so that the ILM policy applies to the new index (which will still be empty).

  3. Change Logstash to start writing to the new alias.
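The policy-plus-template step could be sketched like this on 7.x (the policy name, the rollover/delete thresholds and the datastream-* pattern are just assumptions for illustration; adjust them to your own naming):

curl -X PUT "localhost:9200/_ilm/policy/datastream-policy" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "30d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}
'

curl -X PUT "localhost:9200/_template/datastream-template" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["datastream-*"],
  "settings": {
    "index.lifecycle.name": "datastream-policy",
    "index.lifecycle.rollover_alias": "datastream"
  }
}
'

Because the template carries both index.lifecycle.name and index.lifecycle.rollover_alias, every later index created by rollover inherits them automatically; only the very first index needs the alias set by hand.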

And with these steps we managed to have ILM rolling over indices and deleting them.

Let me know if you need more help.

Hi @antcas,

Thank you very much for your response! I am going to check your solution.

However, there are still a lot of manually executed steps, which are always a source of error.
Of course, your steps can be executed by another application; I am only surprised that there is no automation for this.

Anyways, thanks again!

I guess you can also create the ILM policy (and add it to the proper index template) before creating the index with the alias, and then you avoid the manual rollover part, but we didn't do that (probably because we forgot lol).

As far as I know, there's no way around manually creating a new index with a new alias.

As a summary, the initialization steps would be something like this (in a kubernetes cluster):

  1. deploy and start elasticsearch
  2. deploy Metricbeat and the other data shippers, but they should wait for Elasticsearch (maybe using an initContainer that checks the status of Elasticsearch)
  3. put ILM policies via API
  4. put index templates for linking ILM policies to index patterns via API
  5. put initial indices with proper naming and alias config via API
  6. once the Elasticsearch status is green, start sending data from the Beats
  7. grab a beer
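Steps 2–5 above could be wrapped in a small init script, run from a Kubernetes Job or initContainer once Elasticsearch answers. A minimal sketch, assuming a single unsecured node and placeholder names/thresholds (example_data, 1d/30d) that you would replace with your own:

#!/bin/sh
# Sketch: wait for Elasticsearch, then install the ILM policy,
# the index template and the bootstrap index. All names are placeholders.
ES="http://localhost:9200"

# step 2: wait until Elasticsearch responds (initContainer-style check)
until curl -s "$ES/_cluster/health" >/dev/null; do sleep 5; done

# step 3: put the ILM policy
curl -s -X PUT "$ES/_ilm/policy/example_data-policy" -H 'Content-Type: application/json' -d'
{"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"1d"}}},"delete":{"min_age":"30d","actions":{"delete":{}}}}}}'

# step 4: put the index template linking the policy to the index pattern
curl -s -X PUT "$ES/_template/example_data-template" -H 'Content-Type: application/json' -d'
{"index_patterns":["example_data-*"],"settings":{"index.lifecycle.name":"example_data-policy","index.lifecycle.rollover_alias":"example_data"}}'

# step 5: bootstrap the first index with the write alias,
# using a URL-encoded date math name: <example_data-{now/d}-000001>
curl -s -X PUT "$ES/%3Cexample_data-%7Bnow%2Fd%7D-000001%3E" -H 'Content-Type: application/json' -d'
{"aliases":{"example_data":{"is_write_index":true}}}'

All of the PUTs are idempotent except the bootstrap index, so a real script should first check whether the write alias already exists before creating it.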

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.