Elastic does not perform rollover automatically

Hi folks

I have the following setup: Filebeat sends logs from a Kubernetes cluster to Logstash, and Logstash sends them to Elasticsearch. These are the configurations of the important components:

Index template:

  "filebeat-k8s-logs" : {
    "order" : 1,
    "index_patterns" : [
      "filebeat-k8s-logs-*"
    ],
    "settings" : {
      "index" : {
        "lifecycle" : {
          "name" : "ilm-filebeat",
          "rollover_alias" : "filebeat-k8s-logs"
        },
        "mapping" : {
          "total_fields" : {
            "limit" : "10000"
          }
        },
        "refresh_interval" : "5s",
        "number_of_shards" : "3"
      }
    }
  }

ILM policy:

  PUT _ilm/policy/ilm-filebeat
  {
    "policy": {
      "phases": {
        "hot": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_age": "30d",
              "max_size": "50gb"
            },
            "set_priority": {
              "priority": 100
            }
          }
        },
        "delete": {
          "min_age": "90d",
          "actions": {
            "delete": {}
          }
        }
      }
    }
  }


Settings of one of the already rolled-over indices:

  "settings": {
    "index": {
      "lifecycle": {
        "name": "ilm-filebeat",
        "rollover_alias": "filebeat-k8s-logs",
        "indexing_complete": "true"
      },
      "mapping": {
        "total_fields": {
          "limit": "10000"
        }
      },
      "refresh_interval": "5s",
      "number_of_shards": "3",
      "provided_name": "<filebeat-k8s-logs-{now/d}-000004>"
    }
  }

Logstash config:

input {
  beats {
    port => 5022
    ssl => true
    ssl_certificate_authorities => ["/ca.crt"]
    ssl_certificate => "/client.crt"
    ssl_key => "/client.key"
    ssl_key_passphrase => "${LOGSTASH_KEY_PASS}"
    ssl_verify_mode => "force_peer"
  }
}

output {
  elasticsearch {
    hosts => ["https://elastic.whatever:9200"]
    #index => "filebeat-k8s-logs"
    ilm_enabled => true
    ilm_rollover_alias => "filebeat-k8s-logs"
    ilm_policy => "ilm-filebeat"
    ilm_pattern => "{now/d}-000001"
    ssl => true
    ssl_certificate_verification => true
    cacert => '/ca.crt'
    user => "logstash"
    password => "${LOGSTASH}"
  }
}
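As an aside, one quick check when automatic rollover stalls with a setup like the one above is whether the rollover alias actually has a write index; ILM can only roll over the index that carries "is_write_index": true on the alias. A sketch, using the alias name from this post:

```
GET _alias/filebeat-k8s-logs
```

Exactly one of the returned indices should show "is_write_index" : true under the alias.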


The index does not get rolled over automatically after 50 GB.

When I do:

POST filebeat-k8s-logs/_rollover 

the rollover works perfectly and a new index is created, for example:


How can I achieve automatic rollover? What am I doing wrong?

Many thanks!

Did you create an initial managed index according to the documentation?
Also, rollover policies won't work on a pre-existing index.

The very first time I created the Logstash pipeline and restarted the process, a new index appeared automatically: filebeat-k8s-logs-2020.11.19-000001, so I understood that I do not have to initialize it myself.

I think you should create the initial index manually.
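For reference, the generic bootstrap pattern from the ILM documentation looks like this, using the alias name from the original post (the date-math index name is URL-encoded; in Kibana Dev Tools the encoded form shown here works as-is):

```
PUT %3Cfilebeat-k8s-logs-%7Bnow%2Fd%7D-000001%3E
{
  "aliases": {
    "filebeat-k8s-logs": {
      "is_write_index": true
    }
  }
}
```

The key part is marking the bootstrapped index as the write index of the rollover alias, so ILM knows which index to roll over.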

That would surprise me (or I am misunderstanding something). The documentation says:

When you enable index lifecycle management for Beats or the Logstash Elasticsearch output plugin, the necessary policies and configuration changes are applied automatically. You can modify the default policies, but you do not need to explicitly configure a policy or bootstrap an initial index.

Also, from the post you mentioned, you can see that the user did it the same way as I did and Logstash "took care" of bootstrapping the initial index.

In any case, after I rolled over the index manually, the policy I selected in the template should automatically be applied because the index matches the pattern, and that is also what happens; you can see it in the index data itself.

I think there must be something else going on besides a missing initial index, or I am totally missing something.

Hi @Kosodrom,

a few questions:

  1. Which version of ES are you using?
  2. Is ILM enabled (xpack.ilm.enabled)?
  3. Is indices.lifecycle.poll_interval overridden to a large value?
  4. Can you show the output of the ILM explain API?
  1. 7.6.1

  2. In elasticsearch.yml it is not explicitly set to true, since I assume it is enabled by default. In the Logstash config it is enabled, as you can see from my first post.

  3. indices.lifecycle.poll_interval is not set in the elasticsearch.yml

  4. This is the current index, which is expected to rollover at 50gb:

      {
        "indices" : {
          "filebeat-k8s-logs-2021.01.18-000005" : {
            "index" : "filebeat-k8s-logs-2021.01.18-000005",
            "managed" : true,
            "policy" : "ilm-filebeat",
            "lifecycle_date_millis" : 1610958504404,
            "age" : "1.22d",
            "phase" : "hot",
            "phase_time_millis" : 1610958509673,
            "action" : "rollover",
            "action_time_millis" : 1610958925642,
            "step" : "check-rollover-ready",
            "step_time_millis" : 1610958925642,
            "phase_execution" : {
              "policy" : "ilm-filebeat",
              "phase_definition" : {
                "min_age" : "0ms",
                "actions" : {
                  "rollover" : {
                    "max_size" : "50gb",
                    "max_age" : "30d"
                  },
                  "set_priority" : {
                    "priority" : 100
                  }
                }
              },
              "version" : 2,
              "modified_date_in_millis" : 1605879139780
            }
          }
        }
      }
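As a side note, the effective indices.lifecycle.poll_interval (which defaults to 10m) can also be verified directly, in case it was overridden transiently via the cluster settings API rather than in elasticsearch.yml. A sketch:

```
GET _cluster/settings?include_defaults=true&filter_path=**.indices.lifecycle.poll_interval
```

This shows the setting whether it comes from the defaults, persistent, or transient section.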

Hi @Kosodrom,

thanks, will you also supply the _stats for filebeat-k8s-logs, i.e., GET filebeat-k8s-logs/_stats?

GET filebeat-k8s-logs/_stats or GET filebeat-k8s-logs-2021.01.18-000005/_stats returns a lot of data and exceeds the allowed number of characters for a post.
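If the full _stats output is too large to paste, the standard filter_path response-filtering parameter can trim it down to just the primary store sizes that rollover looks at. A sketch, assuming the index naming from this thread:

```
GET filebeat-k8s-logs-*/_stats/store?filter_path=indices.*.primaries.store.size_in_bytes
```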

Hi @Kosodrom,

can you upload it to:

then I can grab it from there.

I was notified that my link was referenced :slight_smile:

Since that time (Jan 2018), Elasticsearch has gained some ways to bootstrap the index automatically. Always take what we said in the past in the context of what functionality was available at that time.

There are times when manual bootstrapping is still very useful. For example, we have single Logstash pipelines that write to multiple indices based on variable names; since Logstash cannot know all possible values of those variables at startup, no bootstrapping happens.

@rugenl Thank you so much for your feedback :). Yes for such use case I understand that you need to bootstrap the index.

But I still hope that elastic will implement a way so that you never need to do it manually.

Especially in a containerized environment (Kubernetes, OpenShift) it limits you in several places, as far as I understand. For example, you may want to create different indices for different namespaces without having to know which namespaces will be created or deleted in the future. Right now you end up with one index for all your platform logs if you want to automate ILM, because you cannot use variables in the rollover alias. The same goes for pods and containers.

Or you may want to create different ILM policies for different customers (or even let each customer decide the retention time for their logs/metrics using annotations or tags). Currently you have to do this manually, and you have to know about your customers' deployments and namespaces (which you actually don't want) so you can quickly create the template and ILM policy and bootstrap the index before they deploy. Otherwise you have just one ILM policy for all your platform logs, which can lead to discussions with your customers (one wants to delete logs after 5 days to save money, another needs to be compliant with something and keep them for 90 days).

Looking forward to have more flexible ways of configuring ILM :slight_smile:



Hi @Kosodrom,

looking at the stats, it looks like none of the indices grew beyond 50 GB before being rolled over. The current 000005 index is just 5 GB; the previous indices are 11 GB, 28 GB, 44 GB, and 32 GB.

I wonder if part of the confusion is total index size (primaries + replicas) vs. the size of the primaries alone. Rollover only triggers based on primary size, so if you want it to trigger when primaries + replicas = 50 GB, you have to set max_size to 25 GB (it looks like you use 1 replica).
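Concretely, with 1 replica, a policy change along these lines would roll over at roughly 50 GB of total size (25 GB of primaries); the policy and names are taken from the original post, only max_size changes:

```
PUT _ilm/policy/ilm-filebeat
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_age": "30d",
            "max_size": "25gb"
          },
          "set_priority": {
            "priority": 100
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```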

About the ILM bootstrap problem: data streams, available since 7.9, solve that. I am not 100% up to speed on Logstash compatibility with data streams though; feel free to ask in the Logstash topic about that.
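For reference, a minimal data-stream-based sketch (7.9+), assuming the same policy name as above; the template name is illustrative. No rollover_alias or bootstrap index is needed, because the data stream manages its backing indices itself:

```
PUT _index_template/filebeat-k8s-logs
{
  "index_patterns": ["filebeat-k8s-logs*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.lifecycle.name": "ilm-filebeat"
    }
  }
}

PUT _data_stream/filebeat-k8s-logs
```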


Thank you so much! I was just looking at the index size shown in Kibana monitoring! Somehow I must have skipped that part of the documentation!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.