Despite the ILM policy applied to all logs (Managed), indices get moved to the Warm node but are never removed from the Hot node, and I'm now reaching disk capacity on the Hot node.
Here is my Elastic cluster:
w - ovh-dataw-1
hist - ovh-datah-1
mr - ovh-master-2
mr - ovh-master-3
mr * ovh-master-1
c - ovh-datac-1
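If I read the node.role letters right, that means a single Hot node, a single Warm node, one Cold node, and three master-eligible nodes. For reference, the listing above is roughly what the cat nodes API returns with:

GET _cat/nodes?h=node.role,master,name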
Here is the shard output for a problematic index:
index shard prirep state docs store dataset ip node
.ds-logs-winlog.winlog-dc-2023.12.06-000001 0 p STARTED 57415011 50gb 50gb 192.168.74.42 ovh-datah-1
.ds-logs-winlog.winlog-dc-2023.12.06-000001 0 r STARTED 57415011 50gb 50gb 192.168.74.40 ovh-dataw-1
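For reference, that listing comes from the cat shards API, something like:

GET _cat/shards/.ds-logs-winlog.winlog-dc-2023.12.06-000001?v

So the primary copy is still sitting on the Hot node (ovh-datah-1) while the replica is already on my only Warm node (ovh-dataw-1).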
Running this query:
GET /.ds-logs-winlog.winlog-dc-2023.12.06-000001/_ilm/explain?human
I get this interesting output:
"message": "[.ds-logs-winlog.winlog-dc-2023.12.06-000001] lifecycle action [migrate] waiting for [1] shards to be moved to the [data_warm] tier (tier migration preference configuration is [data_warm, data_hot])",
As for new indices: I'm using managed index templates, but there are many of them, so I'm not sure how to proceed.
Should I edit only "logs", or each of the "logs-elastic_agent.XXX" templates?
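From what I understand (and this is an assumption on my part, the exact names depend on the Fleet/stack version), each managed integration template is composed of component templates, and the intended place for overrides is the matching @custom component template rather than the managed index template itself, for example:

PUT _component_template/logs-winlog.winlog@custom
{
  "template": {
    "settings": {
      "index.number_of_replicas": 0
    }
  }
}

That way the override should survive integration upgrades, if I understand the docs correctly.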
Finally, to set replicas to 0 for new indices, I went with editing all the index templates manually.
That was tedious, but doable in under 10 minutes.
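One caveat worth noting: template changes only take effect for backing indices created after the change (i.e., at the next rollover). For the indices that already existed, I believe an explicit settings update is still needed, along these lines (the wildcard is just an illustration, adjust it to your data streams):

PUT logs-*/_settings
{
  "index.number_of_replicas": 0
}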
Thanks again for your help!