How do I use Index Lifecycle Management without the rollover action?

I've been working with the ELK stack for about 3 months, collecting logs from various systems such as servers, network devices, etc. My cluster has 3 nodes.

I'm using monthly indices to keep the logs, like "xserver-2019.10" and "xswitch-2019.10".
The newly created indices use an index template which is bound to an ILM policy.

I want to use ILM without rollover, so that each monthly index moves to the "warm" node 31 days after creation and then to the "cold" node 150 days after creation. A new monthly index is created at the start of each month, so there is always a write index (xserver-2019.11) after the old index (xserver-2019.10) has moved to the "warm" node.
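In other words, something like this (a sketch of just the allocation part; the ages here are the ones I want, and my real policy also has shrink and set_priority, as the ILM explain output further down shows):

PUT _ilm/policy/my-ilm-policy
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "31d",
        "actions": {
          "allocate": {
            "require": { "box_type": "warm" }
          }
        }
      },
      "cold": {
        "min_age": "150d",
        "actions": {
          "allocate": {
            "require": { "box_type": "cold" }
          }
        }
      }
    }
  }
}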

There is an option to disable the rollover action in the ILM policy configuration, and I disabled it. I also enabled the shrink action to go from 6 primary shards to 1 primary shard. But there is this message in the step info:

"message" : "Waiting for [6] shards to be allocated to nodes matching the given filters"

The index is stuck at this step and is not being allocated to the warm node.

My nodes:
node-1 > box_type: hot
node-2 > box_type: warm
node-3 > box_type: cold
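(The box_type values are custom node attributes set in each node's elasticsearch.yml, e.g.:

node.attr.box_type: hot

and they can be verified with:

GET /_cat/nodeattrs?v
)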

Index shard status

GET /_cat/shards/document-2019.10?v

index            shard prirep state   docs store   node
document-2019.10 4     p      STARTED 21   181.5kb es-node-2
document-2019.10 4     r      STARTED 21   181.5kb es-node-3
document-2019.10 5     r      STARTED 20   195.4kb es-node-2
document-2019.10 5     p      STARTED 20   195.4kb es-node-3
document-2019.10 3     p      STARTED 14   166.2kb es-node-1
document-2019.10 3     r      STARTED 14   166.2kb es-node-2
document-2019.10 1     r      STARTED 18   245.9kb es-node-1
document-2019.10 1     p      STARTED 18   245.9kb es-node-2
document-2019.10 2     r      STARTED 11   109.4kb es-node-1
document-2019.10 2     p      STARTED 11   109.4kb es-node-2
document-2019.10 0     p      STARTED 14   166kb   es-node-1
document-2019.10 0     r      STARTED 14   166kb   es-node-2

Cluster allocation explain

GET /_cluster/allocation/explain

{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"
  },
  "status": 400
}
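(Note: without a request body this API explains the first unassigned shard it finds; since all shards above are STARTED, there is none, hence the 400. A specific started shard can still be examined by naming it, for example:

GET /_cluster/allocation/explain
{
  "index": "document-2019.10",
  "shard": 0,
  "primary": true
}
)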

Index settings

GET document-2019.10/_settings

{
  "document-2019.10" : {
    "settings" : {
      "index" : {
        "lifecycle" : {
          "name" : "my-ilm-policy"
        },
        "routing" : {
          "allocation" : {
            "require" : {
              "box_type" : "warm"
            }
          }
        },
        "number_of_shards" : "6",
        "provided_name" : "document-2019.10",
        "creation_date" : "1570703493600",
        "priority" : "50",
        "number_of_replicas" : "1",
        "uuid" : "eLNWEqF6SbqfGcztJq8byQ",
        "version" : {
          "created" : "7030299"
        }
      }
    }
  }
}

Index Template

GET _template/my-index-template

{
  "my-index-template" : {
    "order" : 0,
    "index_patterns" : [
      "*",
      "*-*.*"
    ],
    "settings" : {
      "index" : {
        "lifecycle" : {
          "name" : "my-ilm-policy"
        },
        "number_of_shards" : "6"
      }
    },
    "mappings" : { },
    "aliases" : { }
  }
}

ILM Explain

GET document-2019.10/_ilm/explain

{
  "indices" : {
    "document-2019.10" : {
      "index" : "document-2019.10",
      "managed" : true,
      "policy" : "my-ilm-policy",
      "lifecycle_date_millis" : 1570703493600,
      "phase" : "warm",
      "phase_time_millis" : 1570707576602,
      "action" : "allocate",
      "action_time_millis" : 1570708176858,
      "step" : "check-allocation",
      "step_time_millis" : 1570708176989,
      "step_info" : {
        "message" : "Waiting for [6] shards to be allocated to nodes matching the given filters",
        "shards_left_to_allocate" : 6,
        "all_shards_active" : true,
        "actual_replicas" : 1
      },
      "phase_execution" : {
        "policy" : "my-ilm-policy",
        "phase_definition" : {
          "min_age" : "1h",
          "actions" : {
            "allocate" : {
              "number_of_replicas" : 1,
              "include" : { },
              "exclude" : { },
              "require" : {
                "box_type" : "warm"
              }
            },
            "shrink" : {
              "number_of_shards" : 1
            },
            "set_priority" : {
              "priority" : 50
            }
          }
        },
        "version" : 5,
        "modified_date_in_millis" : 1570695626874
      }
    }
  }
}

I believe in this case it may be because your replicas cannot move to the "warm" node, so it's waiting until they can. Are you able to add a second warm node (or reduce the replica count to 0, but that is dangerous)?
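For example, the warm phase's allocate action can drop the replicas at the same time as it moves the index, roughly like this (a sketch keyed to your policy above):

"warm": {
  "min_age": "31d",
  "actions": {
    "allocate": {
      "number_of_replicas": 0,
      "require": {
        "box_type": "warm"
      }
    }
  }
}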

Hi @dakrone,

I didn't make any changes, but this issue somehow resolved itself. Now the monthly indices match my template, and then the index lifecycle policy is applied. But there is a small remaining issue: some system indices also match my template, and I don't want them to. The matching system indices:

.monitoring-es-7-*
.monitoring-kibana-7-*

The other system indices match their own default templates, but these two also match mine. I changed my template's order to "-1", but it doesn't work. Could you please inform me about this? (See the note after the template list below.)

GET _cat/templates

logstash                    [logstash-*]                  0          60001
.watch-history-10           [.watcher-history-10*]        2147483647
my-index-template           [*, *-*.*]                    -1
.data-frame-notifications-1 [.data-frame-notifications-*] 0          7030299
.watches                    [.watches*]                   2147483647
.ml-config                  [.ml-config]                  0          7030299
.management-beats           [.management-beats]           0          70000
.triggered_watches          [.triggered_watches*]         2147483647
.ml-state                   [.ml-state*]                  0          7030299
.monitoring-alerts-7        [.monitoring-alerts-7]        0          7000199
.monitoring-alerts          [.monitoring-alerts-6]        0          6070299
.logstash-management        [.logstash]                   0
.kibana_task_manager        [.kibana_task_manager]        0          7030299
.ml-notifications           [.ml-notifications]           0          7030299
.data-frame-internal-1      [.data-frame-internal-1]      0          7030299
.ml-anomalies-              [.ml-anomalies-*]             0          7030299
.monitoring-logstash        [.monitoring-logstash-7-*]    0          7000199
.monitoring-es              [.monitoring-es-7-*]          0          7000199
.monitoring-beats           [.monitoring-beats-7-*]       0          7000199
.ml-meta                    [.ml-meta]                    0          7030299
.watch-history-9            [.watcher-history-9*]         2147483647
.monitoring-kibana          [.monitoring-kibana-7-*]      0          7000199
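(Note: the template order only decides whose settings win when an index matches several templates; it does not stop an index from matching. Since catch-all patterns like *-*.* also match dotted system indices such as the daily .monitoring-es-7-* indices, the usual fix is to narrow index_patterns to the actual log index prefixes, for example, using the index names mentioned above:

PUT _template/my-index-template
{
  "index_patterns": ["xserver-*", "xswitch-*"],
  "settings": {
    "index": {
      "lifecycle": {
        "name": "my-ilm-policy"
      },
      "number_of_shards": 6
    }
  }
}
)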
