ILM Policy through logstash dynamic index name

Hello Team,

I am trying to implement an ILM policy, but I ran into an issue because my index name is dynamically created through Logstash. Let me give you an explanation of my setup and what I am trying to do.

  1. I created the template using the below API

    PUT _template/my_template
    {
      "index_patterns": ["logs*"],
      "settings": {
        "number_of_shards": 2,
        "number_of_replicas": 0,
        "index.lifecycle.name": "logs_policy",
        "index.lifecycle.rollover_alias": "logs"
      }
    }

  2. Then I created the index

    PUT logs-000001
    {
      "aliases": {
        "logs": {
          "is_write_index": true
        }
      }
    }

  3. In my Logstash config I set up the below ILM settings

output {
  elasticsearch {
    ilm_rollover_alias => "logs"
    ilm_pattern => "000001"
    ilm_policy => "logs_policy"
  }
}

All the above worked, but now I want to create logs for each module. The events I am sending have a field module_name, so in Logstash I can set the index name dynamically and it will create logs_%{module_name}. But if I do that, will I have to set up an ILM policy and template for each of the modules? Is there a better way to implement it?

New Logstash config for the dynamic index:

filter {
  json {
    source => "message"
    remove_field => ["message", "@version"]
  }
  mutate {
    lowercase => ["module_name"]
  }
}

output {
  elasticsearch {
    index => "logs_%{module_name}"
  }
}

An elasticsearch output can support sprintf references for the index name because in the bulk API each indexing request includes the index name, so it can be copied from the event. However, ILM is applied when the output makes its initial connection to elasticsearch and at that point there is no event that can be referenced.

Do you want the same policy for all of the indexes? If so, you may be able to apply the policy after the index is created.
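As a sketch of that approach (logs_module1 and logs_policy here are placeholder names for your index and policy), attaching the policy to an already-created index is a single settings call:

```
PUT logs_module1/_settings
{
  "index.lifecycle.name": "logs_policy"
}
```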

Note that having a large number of small indexes is a negative for performance in elasticsearch.

Thanks @Badger for the quick response.

Yes, I want the same policy to apply to all my indexes, but when I use the ilm_rollover_alias in Logstash as "logs", the rollover doesn't happen. In that case, what should the alias settings be?

I do not use elasticsearch, but I do not think the logstash elasticsearch output can support this. But why do you care? The rollover policy only matters when the index is rolled over. If it takes 24 hours to do the rollover, then if you have a cron job that calls a curl statement to set the policy every 12 hours, the end result will be the same.

I realise that it feels much better to have the index configured better from the moment of its creation. I would prefer that too, but does it matter? You can beat ES to the punch (rollover) and set the policy before it takes effect.
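One way to sketch that cron-job idea (the host, index pattern, and policy name here are assumptions, not from the thread) is a crontab entry that re-applies the policy to any matching indices every 12 hours; the update-settings API accepts wildcards, so indices created since the last run get picked up:

```
# Apply the ILM policy to all logs_* indices every 12 hours
0 */12 * * * curl -s -X PUT "localhost:9200/logs_*/_settings" -H 'Content-Type: application/json' -d '{"index.lifecycle.name": "logs_policy"}'
```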

Hello @Badger

I tried to implement the curl command using the below syntax:

curl -X PUT "localhost:9200/test-index?pretty" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index.lifecycle.name": "my_policy"
  }
}
'
but I am facing issues when I try to implement the policy, as it is needed to provide the "index_pattern", which is logs* in my case, and the ilm_pattern is "000001". So when Logstash creates an index such as logs_module1, it is not getting picked up and the rollover doesn't happen.

Can you explain the steps to follow? I feel I am doing something wrong there.

Thanks a lot in advance.

Try using the method I linked to.

I tried the following steps and ran into the below error.

  1. Created the template and set the ILM policy for that template.

PUT _template/logs_template
{
  "index_patterns": ["testlogs*"],
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 0,
    "index.lifecycle.name": "logs_policy",
    "index.lifecycle.rollover_alias": "testlogs"
  }
}

  2. Edited the policy and set the doc count to 10 to test the rollover.

  3. Sent the data using Logstash to the index. Index name: "testlogs_dataseers".

  4. Applied the policy to the index after it was created.

PUT testlogs_dataseers/_settings
{
  "index": {
    "lifecycle": {
      "name": "logs_policy"
    }
  }
}

Can I know what I am doing wrong here?

Waiting to hear back from the community.

First clean everything up.

Here is a sample that works

PUT _ilm/policy/license-data
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "30d"
          }
        }
      }
    }
  }
}

PUT _index_template/license-data
{
  "index_patterns": [
    "license-data*"
  ],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "lifecycle": {
        "name": "license-data",
        "rollover_alias": "license-data"
      }
    }
  }
}

Logstash Conf : simple-conf.conf

##################################
# Read License file
##################################
input {
  file {
    path => "/Users/sbrown/workspace/elastic-install/7.11.2/logstash-7.11.2/LICENSE.txt"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

output {
  # pump to stdout for debug
  
  stdout {codec => rubydebug}

  elasticsearch {
    hosts => ["localhost:9200"]
    ilm_rollover_alias => "license-data"
    ilm_pattern => "000001"
    ilm_policy => "license-data"
  }
}

Run logstash

sudo ./bin/logstash -r -f ./simple-conf.conf

GET _cat/aliases/license-data?v
GET _cat/indices/license-data-000001
GET /license-data

Results

# GET _cat/aliases/license-data?v
alias        index               filter routing.index routing.search is_write_index
license-data license-data-000001 -      -             -              true

# GET _cat/indices/license-data-000001
yellow open license-data-000001 ZhlSamExR1KqTNdCNQUjCw 1 1 223 0 93.2kb 93.2kb

# GET /license-data
{
  "license-data-000001" : {
    "aliases" : {
      "license-data" : {
        "is_write_index" : true
      }
    },
    "mappings" : {
      "properties" : {
        "@timestamp" : {
          "type" : "date"
        },
        "@version" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "host" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "message" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "path" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        }
      }
    },
    "settings" : {
      "index" : {
        "lifecycle" : {
          "name" : "license-data",
          "rollover_alias" : "license-data"
        },
        "routing" : {
          "allocation" : {
            "include" : {
              "_tier_preference" : "data_content"
            }
          }
        },
        "number_of_shards" : "1",
        "provided_name" : "<license-data-000001>",
        "creation_date" : "1618338466085",
        "number_of_replicas" : "1",
        "uuid" : "ZhlSamExR1KqTNdCNQUjCw",
        "version" : {
          "created" : "7110199"
        }
      }
    }
  }
}

With respect to dynamic index names, you will need the policies and templates in place for all the combinations.
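If the list of modules is known up front, that per-combination setup can be scripted rather than done by hand. A sketch (the module names, host, and alias naming are assumptions) that bootstraps a rollover alias and write index for each module:

```
# Create a bootstrap index ending in -000001 for each module and
# point the module's rollover alias at it as the write index.
for module in module1 module2; do
  curl -s -X PUT "localhost:9200/logs_${module}-000001" \
    -H 'Content-Type: application/json' \
    -d "{\"aliases\": {\"logs_${module}\": {\"is_write_index\": true}}}"
done
```

Each module's pipeline (or a separate elasticsearch output per module with its own ilm_rollover_alias) would then write through its own alias.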

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.