No rollover with data stream lifecycle

For testing purposes I’ve configured a lifecycle on a data stream with a retention time of 1h. But the backing index does not get rolled over.
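
For reference, the retention was set roughly like this (either directly on the data stream, as below, or via the template):

PUT _data_stream/logs-prod2-nlc/_lifecycle
{
  "data_retention": "1h"
}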

According to this doc, Data stream lifecycle settings in Elasticsearch | Reference, with the default value of the cluster.lifecycle.default.rollover setting the rollover should happen if:

  • either any primary shard reaches a size of 50GB,

  • or any primary shard contains 200,000,000 documents,

  • or the index reaches a certain age, which depends on the retention time of your data stream,

  • and the index has at least one document.

So in my case I would expect a rollover after 1h.
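
To double-check what those defaults resolve to, the cluster.lifecycle.default.rollover setting can be inspected with something like:

GET _cluster/settings?include_defaults=true&flat_settings=true

which should list cluster.lifecycle.default.rollover under "defaults", with max_age=auto meaning the age threshold is derived from the data stream's retention.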

If I explain the lifecycle on the data stream I see that the lifecycle is properly configured on the backing index:

GET logs-prod2-nlc/_lifecycle/explain
{
  "indices": {
    ".ds-logs-prod2-nlc-2025.09.22-000001": {
      "index": ".ds-logs-prod2-nlc-2025.09.22-000001",
      "managed_by_lifecycle": true,
      "index_creation_date_millis": 1758554925361,
      "time_since_index_creation": "21.35h",
      "lifecycle": {
        "enabled": true,
        "data_retention": "1h",
        "effective_retention": "1h",
        "retention_determined_by": "data_stream_configuration"
      }
    }
  }
}

I also checked that prefer_ilm is set to false in the backing index settings:
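
For reference, a request along these lines returns them (the relevant part is the index.lifecycle block):

GET .ds-logs-prod2-nlc-2025.09.22-000001/_settings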

{
  "settings": {
    "index": {
      ...
      "number_of_replicas": "0",
      "uuid": "faU5gfEnQgmZsukYd6O-AA",
      "version": {
        "created": "9033000"
      },
      "lifecycle": {
        "name": "logs",
        "prefer_ilm": "false"
      },

So why is there no rollover? What am I missing?

Are there any errors in the logs of any of your Elasticsearch nodes? Maybe rollover is failing for some reason. If you don’t see any errors in any logs, I would check the health of the cluster (GET _health_report). And if the reported health is green, I would probably turn on trace-level logging for org.elasticsearch.datastreams.lifecycle.DataStreamLifecycleService and org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction (this will be verbose, so I don’t recommend it in production, at least not for very long).
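
For example, those loggers can be bumped dynamically with something like:

PUT _cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.datastreams.lifecycle.DataStreamLifecycleService": "TRACE",
    "logger.org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction": "TRACE"
  }
}

Setting them back to null afterwards restores the default log levels.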

For what it’s worth, I just tried this out by copy/pasting the commands in the data stream lifecycle tutorial, and it worked as you would expect. The only change I made was that I reduced the retention from 7d to 1h. Shortly after the 1h mark, my write index rolled over.
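
In case it helps for comparison, the tutorial boils down to roughly this (names as in the tutorial, retention reduced to 1h), plus indexing at least one document so the min_docs condition can be met:

PUT _index_template/my-index-template
{
  "index_patterns": ["my-data-stream*"],
  "data_stream": {},
  "priority": 500,
  "template": {
    "lifecycle": {
      "data_retention": "1h"
    }
  }
}

PUT _data_stream/my-data-stream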

Thanks, I will check the TRACE log.

Meanwhile it rolled over, but probably for other reasons than I’d expected:

Desired balance computation for [37] is still not converged after [3.2h] and [1] iterations
Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-logs-prod2-nlc-2025.09.23-000002][0]]]).
Data stream lifecycle successfully rolled over datastream [logs-prod2-nlc] due to the following met rollover conditions [[max_age: 1d], [min_docs: 1]]. The new index is [.ds-logs-prod2-nlc-2025.09.23-000002]
[.ds-logs-prod2-nlc-2025.09.23-000002/gjKaOODPRvGWUCdaH8qjVQ] update_mapping [_doc]
[.ds-logs-prod2-nlc-2025.09.23-000002/gjKaOODPRvGWUCdaH8qjVQ] update_mapping [_doc]
[.ds-logs-prod2-nlc-2025.09.23-000002/gjKaOODPRvGWUCdaH8qjVQ] update_mapping [_doc]
Data stream lifecycle service successfully updated settings [[index.merge.policy.merge_factor, index.merge.policy.floor_segment]] for index index [.ds-logs-prod2-nlc-2025.09.22-000001]
Data stream lifecycle is issuing a request to force merge index [.ds-logs-prod2-nlc-2025.09.22-000001]
Data stream lifecycle successfully force merged index [.ds-logs-prod2-nlc-2025.09.22-000001]
[.ds-logs-prod2-nlc-2025.09.23-000002/gjKaOODPRvGWUCdaH8qjVQ] update_mapping [_doc]

It looks like the rollover happened after 1d.

Also, is it normal to see a force merge? The docs say it should rather do a “tail merge” (whatever that means exactly)…

Could it be that I still see some remains of the previously configured ILM policy on the index (even though prefer_ilm is false now)?

Yes that is normal. The tail merge that data stream lifecycles do is just a force merge that is optimized for runtime performance – it merges segments but not down to a single segment.
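
In API terms it is roughly equivalent to a force merge request without max_num_segments, e.g.:

POST .ds-logs-prod2-nlc-2025.09.22-000001/_forcemerge

so the smaller trailing segments get merged, but the index is not collapsed into a single segment.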

I don’t think so – everything in that log looks data-stream-lifecycle-related. Is there any chance you changed your retention after the data stream was created, and Elasticsearch did not correctly update the rollover to reflect that change? The rollover rules are described in a comment in the code: elasticsearch/server/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverConfiguration.java at main · elastic/elasticsearch · GitHub.
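
One quick way to see which system is managing each backing index is the get data stream API, e.g.:

GET _data_stream/logs-prod2-nlc

Each backing index entry should report its prefer_ilm setting and a managed_by value, which in your case should point at the data stream lifecycle rather than ILM.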

Yes, that’s quite possible. It could be that I started with 1d and then changed my mind and set it to 1h. The rollover still happens after 1d now.

So would you consider this a bug? Anything I should try to boil it down further?

Probably not related, but I also noticed something curious: the index suffix went from 000002 to 000004 and skipped 000003:

"Data stream lifecycle is issuing a request to force merge index [.ds-logs-prod2-nlc-2025.09.23-000002]"
"Data stream lifecycle successfully force merged index [.ds-logs-prod2-nlc-2025.09.23-000002]"
"Updating cluster state with force merge complete marker for .ds-logs-prod2-nlc-2025.09.23-000002"
"Updated cluster state for force merge of index [.ds-logs-prod2-nlc-2025.09.23-000002]"
"Clearing recorded error for index [.ds-logs-prod2-nlc-2025.09.23-000002] because the [indices:admin/forcemerge] action was successful"
"Data stream lifecycle issues rollover request for data stream [logs-prod2-nlc]"
"Already force merged .ds-logs-prod2-nlc-2025.09.23-000002"
"auto sharding result for data stream [logs-prod2-nlc] is [No recommendation as auto-sharding not enabled]"
"Clearing recorded error for index [.ds-logs-prod2-nlc-2025.09.24-000004] because the [indices:admin/rollover] action was successful"
"Data stream lifecycle issues rollover request for data stream [logs-prod2-nlc]"
"Already force merged .ds-logs-prod2-nlc-2025.09.23-000002"
"auto sharding result for data stream [logs-prod2-nlc] is [No recommendation as auto-sharding not enabled]"

The data stream also claims that its generation is 5 now, but I only count 3 rollovers so far.

Yeah that would be a bug. Can you reproduce it? If I have time, I’ll try to reproduce it today.

That’s perfectly normal. You’ll see that pretty frequently. I forget exactly why that is the case, but there is no guarantee that those numbers will not have gaps.

I think so. Here’s what I tried today:

  1. Create a logs@custom component template (to be included by the default logs index template)
  2. Set a data retention of 2h there and also set index.lifecycle.prefer_ilm to false (see the sketch after this list)
  3. Delete the already created data stream
  4. Wait for the data stream to be recreated (it constantly receives logs via Logstash)
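
For reference, a minimal sketch of that logs@custom component template (the exact contents may differ slightly from what I actually applied):

PUT _component_template/logs@custom
{
  "template": {
    "settings": {
      "index.lifecycle.prefer_ilm": "false"
    },
    "lifecycle": {
      "data_retention": "2h"
    }
  }
}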

Now if I check the data stream, everything looks fine: the data retention is 2h.
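
Checked for example with:

GET _data_stream/logs-prod2-nlc/_lifecycle

which shows the 2h value as the effective data retention.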

But there’s no rollover after 2h.

Preview of the “logs” template
{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "logs",
          "prefer_ilm": "false"
        },
        "codec": "best_compression",
        "routing": {
          "allocation": {
            "include": {
              "_tier_preference": "data_hot"
            }
          }
        },
        "default_pipeline": "logs@default-pipeline",
        "mapping": {
          "total_fields": {
            "ignore_dynamic_beyond_limit": "true"
          },
          "ignore_malformed": "true"
        },
        "number_of_replicas": "0"
      }
    },
    "mappings": {
      "_data_stream_timestamp": {
        "enabled": true
      },
      "dynamic_templates": [
        {
          "ecs_timestamp": {
            "match": "@timestamp",
            "mapping": {
              "ignore_malformed": false,
              "type": "date"
            }
          }
        },
        {
          "ecs_message_match_only_text": {
            "path_match": [
              "message",
              "*.message"
            ],
            "unmatch_mapping_type": "object",
            "mapping": {
              "type": "match_only_text"
            }
          }
        },
        {
          "ecs_non_indexed_keyword": {
            "path_match": [
              "*event.original",
              "*gen_ai.agent.description"
            ],
            "mapping": {
              "doc_values": false,
              "index": false,
              "type": "keyword"
            }
          }
        },
        {
          "ecs_non_indexed_long": {
            "path_match": "*.x509.public_key_exponent",
            "mapping": {
              "doc_values": false,
              "index": false,
              "type": "long"
            }
          }
        },
        {
          "ecs_ip": {
            "path_match": [
              "ip",
              "*.ip",
              "*_ip"
            ],
            "match_mapping_type": "string",
            "mapping": {
              "type": "ip"
            }
          }
        },
        {
          "ecs_wildcard": {
            "path_match": [
              "*.io.text",
              "*.message_id",
              "*registry.data.strings",
              "*url.path"
            ],
            "unmatch_mapping_type": "object",
            "mapping": {
              "type": "wildcard"
            }
          }
        },
        {
          "ecs_path_match_wildcard_and_match_only_text": {
            "path_match": [
              "*.body.content",
              "*url.full",
              "*url.original"
            ],
            "unmatch_mapping_type": "object",
            "mapping": {
              "fields": {
                "text": {
                  "type": "match_only_text"
                }
              },
              "type": "wildcard"
            }
          }
        },
        {
          "ecs_match_wildcard_and_match_only_text": {
            "match": [
              "*command_line",
              "*stack_trace"
            ],
            "unmatch_mapping_type": "object",
            "mapping": {
              "fields": {
                "text": {
                  "type": "match_only_text"
                }
              },
              "type": "wildcard"
            }
          }
        },
        {
          "ecs_path_match_keyword_and_match_only_text": {
            "path_match": [
              "*.title",
              "*.executable",
              "*.name",
              "*.working_directory",
              "*.full_name",
              "*file.path",
              "*file.target_path",
              "*os.full",
              "*email.subject",
              "*vulnerability.description",
              "*user_agent.original"
            ],
            "unmatch_mapping_type": "object",
            "mapping": {
              "fields": {
                "text": {
                  "type": "match_only_text"
                }
              },
              "type": "keyword"
            }
          }
        },
        {
          "ecs_date": {
            "path_match": [
              "*.timestamp",
              "*_timestamp",
              "*.not_after",
              "*.not_before",
              "*.accessed",
              "created",
              "*.created",
              "*.installed",
              "*.creation_date",
              "*.ctime",
              "*.mtime",
              "ingested",
              "*.ingested",
              "*.start",
              "*.end",
              "*.indicator.first_seen",
              "*.indicator.last_seen",
              "*.indicator.modified_at",
              "*threat.enrichments.matched.occurred"
            ],
            "unmatch_mapping_type": "object",
            "mapping": {
              "type": "date"
            }
          }
        },
        {
          "ecs_path_match_float": {
            "path_match": [
              "*.score.*",
              "*_score*"
            ],
            "path_unmatch": "*.version",
            "unmatch_mapping_type": "object",
            "mapping": {
              "type": "float"
            }
          }
        },
        {
          "ecs_usage_double_scaled_float": {
            "path_match": "*.usage",
            "match_mapping_type": [
              "double",
              "long",
              "string"
            ],
            "mapping": {
              "scaling_factor": 1000,
              "type": "scaled_float"
            }
          }
        },
        {
          "ecs_geo_point": {
            "path_match": "*.geo.location",
            "mapping": {
              "type": "geo_point"
            }
          }
        },
        {
          "ecs_flattened": {
            "path_match": [
              "*structured_data",
              "*exports",
              "*imports"
            ],
            "match_mapping_type": "object",
            "mapping": {
              "type": "flattened"
            }
          }
        },
        {
          "ecs_gen_ai_integers": {
            "path_match": [
              "*gen_ai.request.max_tokens",
              "*gen_ai.usage.input_tokens",
              "*gen_ai.usage.output_tokens",
              "*gen_ai.request.choice.count",
              "*gen_ai.request.seed"
            ],
            "mapping": {
              "type": "integer"
            }
          }
        },
        {
          "ecs_gen_ai_doubles": {
            "path_match": [
              "*gen_ai.request.temperature",
              "*gen_ai.request.top_k",
              "*gen_ai.request.frequency_penalty",
              "*gen_ai.request.presence_penalty",
              "*gen_ai.request.top_p"
            ],
            "mapping": {
              "type": "double"
            }
          }
        },
        {
          "all_strings_to_keywords": {
            "match_mapping_type": "string",
            "mapping": {
              "ignore_above": 1024,
              "type": "keyword"
            }
          }
        }
      ],
      "date_detection": false,
      "properties": {
        "@timestamp": {
          "type": "date",
          "ignore_malformed": false
        },
        "data_stream": {
          "properties": {
            "dataset": {
              "type": "constant_keyword"
            },
            "namespace": {
              "type": "constant_keyword"
            },
            "type": {
              "type": "constant_keyword",
              "value": "logs"
            }
          }
        }
      }
    },
    "aliases": {},
    "lifecycle": {
      "enabled": true,
      "data_retention": "2h"
    },
    "data_stream_options": {
      "failure_store": {
        "enabled": true
      }
    }
  }
}
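
For what it’s worth, a resolved preview like the one above can also be produced with the simulate template API (assuming the index template is simply named logs):

POST _index_template/_simulate/logs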