Lifecycle Policy

I created a lifecycle policy in Kibana and attached it to a data stream, for example:

logs-system.syslog-default

However, the indices it created, such as:

.ds-logs-system.syslog-default-2025.08.21-000001

are still showing the old policy.

Do I need to manually update each index, or since this is a data stream, will the policy applied to the data stream (in this case logs-system.syslog-default) automatically propagate to its backing indices?

Thanks.

Any change to the index template or ILM policy of a data stream is only applied to new backing indices; the already existing backing indices will not change.

You would need to edit each existing backing index in the Index Management page and change the policy name.
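If it helps, the same change can be made through the API instead of clicking through each index. A minimal sketch in console syntax, using the backing index name from above; my-new-policy is just a placeholder for whatever your policy is actually called:

# Check which ILM policy the existing backing index is currently using
GET .ds-logs-system.syslog-default-2025.08.21-000001/_ilm/explain

# Point that backing index at the new policy
PUT .ds-logs-system.syslog-default-2025.08.21-000001/_settings
{
  "index.lifecycle.name": "my-new-policy"
}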


I tested it on a specific index — it worked, and the data moved to the cold node.
However, it still appears under the hot node as well. Why is that?

In Stack Monitoring, when I select the hot node (and also the cold node), I see the same shard listed under both the hot and the cold node. @leandrojmp

You need to provide more context about your nodes.

What are the roles you have in elasticsearch.yml for each node?

Also, please share a screenshot; it is not clear exactly what you are seeing.
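For the roles, you can also pull them straight from the running cluster; a quick sketch:

# List every node with its roles (tier membership is derived from these)
GET _cat/nodes?v&h=name,node.role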

Here is the hot data node's elasticsearch.yml configuration:

node.roles: [data, data_content, data_hot, data_warm, ingest, master, remote_cluster_client, transform]

and here is the cold node:
node.roles: [ data_cold, master, voting_only ]

See screenshot from cold machine:

See screenshot from hot machine:

Your hot node has the data role; this puts the node in all tiers and takes precedence over the specialized data tier roles. [documentation]

You need to remove the data role from the hot nodes and restart them.
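For example, based on the hot node configuration you shared, elasticsearch.yml would end up looking something like this after dropping only the data role (a sketch; keep whichever of the other roles you actually need):

node.roles: [ data_content, data_hot, data_warm, ingest, master, remote_cluster_client, transform ]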


Do I need to take any additional steps to ensure the data is moved completely, or is it enough to remove the data role and restart, so that it no longer shows up under both nodes?

How many nodes of each tier do you have?

Since you have replicas, you need at least 2 nodes in each tier; otherwise your cluster will be in a yellow state and you would need to remove the replicas to get it back to green.
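If the cluster does go yellow because a replica has no second node in its tier to live on, the replicas can be dropped per index; a sketch with a placeholder index name:

PUT my-index/_settings
{
  "index.number_of_replicas": 0
}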

It is just a test environment.

Hot A node.roles: [ data, data_content, data_hot, data_warm, ingest, master, remote_cluster_client, transform ]

Hot B node.roles: [ data, data_content, data_hot, data_warm, ingest, master, remote_cluster_client, transform ]

Cold node.roles: [ data_cold, master, voting_only ]

I just want to move some specific indices completely to the cold node, without them staying on the hot node(s).

Then you just need to remove the data role from your hot nodes and restart them; the cluster will reorganize the shards.
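Once the nodes are back up, you can confirm where the shards ended up, for example with the backing index pattern from earlier in the thread; a sketch:

# Each shard should list only a cold node once relocation finishes
GET _cat/shards/.ds-logs-system.syslog-default-*?v&h=index,shard,prirep,state,node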


After I removed the data role from the YAML file, I tried adding the Windows integration for testing, but it now gets stuck at this stage.

Do you have any idea what might be causing this? @leandrojmp

Screenshot: