We had a situation where our logging-and-metrics cluster received a lot of logs because of trouble with one of our clusters. As a result, two of the three nodes hit 100% disk usage. We tried restarting the cluster, but afterwards only a little space had been freed, and one node was still at 100% disk usage. So we decided to temporarily configure a shorter retention period.
The problem then was that we had to wait for the automatic job to clean up the older logs during the night.
Setup:
ECE 2.2
Now my question:
Is there any way to manually trigger the retention clean-up job? It is very unsatisfying to have to wait for the job to run automatically.
Hello @JPT,
Unfortunately, that is not possible. However, you can delete indices yourself by sending delete requests to the cluster. There is a handy tool in the admin console UI called the Console API that lets you send requests directly to the cluster.
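As a minimal sketch of what that could look like in the Console (the index names below are hypothetical; list your indices first and pick the oldest ones):

```
# List indices sorted by name to find the oldest ones
GET _cat/indices?v&s=index

# Delete a single old daily index (hypothetical name)
DELETE /logging-2019.04.01
```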
Also, you can temporarily add more disk space to a particular node or set of nodes via the "override disk quota" option.
Thank you for the explanation. The workaround of temporarily overriding the disk quota is useful.
But deleting the indices manually is not satisfactory, because there is always a risk of making a mistake in a manual task.
Will the behavior be different when the logging-and-metrics cluster is on version 7?
Could we then use the Console API to trigger index lifecycle management?
ECE 2.2 allows you to upgrade the logging cluster. So if you upgrade that cluster to a version (6.7.0+) that supports index lifecycle management (ILM), you can use that feature, but keep in mind that the curation embedded in ECE will still run as well.
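As a rough sketch of what an ILM setup could look like after the upgrade (the policy name, index pattern, and 7-day retention are assumptions for illustration, not your actual settings):

```
# Hypothetical policy that deletes indices 7 days after creation
PUT _ilm/policy/logging-retention
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

# Attach the policy to newly created indices via an index template (pattern is hypothetical)
PUT _template/logging
{
  "index_patterns": ["logging-*"],
  "settings": {
    "index.lifecycle.name": "logging-retention"
  }
}
```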