Hi Team,
I have a single-node Elasticsearch cluster with 60 GB RAM and 1.2 TB of disk. I have one Filebeat index and one Metricbeat index, each receiving 6 GB of data per day, and one APM index receiving 10 GB of data per day. I want to retain 60 days of data and delete the 1st day's data on the 61st day (for each index). If you could propose a strategy for implementing this with ILM, it would be really helpful. I also have the questions below:
How much disk space would I require?
What would be the effect on CPU usage for this much data, and would I need to increase RAM?
How would increasing the number of shards for these indices help in this situation?
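As a rough back-of-envelope sketch of the disk question, using only the daily figures stated above (this deliberately ignores replica shards, indexing overhead, and compression, any of which can move the number significantly):

```python
# Back-of-envelope disk estimate for 60 days of retention.
# Daily ingest figures come from the post above; replicas,
# index overhead, and compression are NOT accounted for.
filebeat_gb_per_day = 6
metricbeat_gb_per_day = 6
apm_gb_per_day = 10
retention_days = 60

daily_total_gb = filebeat_gb_per_day + metricbeat_gb_per_day + apm_gb_per_day
total_gb = daily_total_gb * retention_days

print(f"{daily_total_gb} GB/day x {retention_days} days = {total_gb} GB "
      f"(~{total_gb / 1024:.2f} TB)")
```

On those raw figures alone (about 1.3 TB), the 1.2 TB disk would already be tight before replicas or Elasticsearch's disk watermark thresholds are taken into account.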
Since your questions are not Filebeat/Metricbeat specific but mostly about Elasticsearch, could you please post them in the Elasticsearch topics so you can get help from the team there?
Please make sure you have read the Elasticsearch ILM documentation, as well as the relevant docs for both of your Beats. Once you have that set up, configure your policy to match the required retention period and you should be good to go!
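For reference, a minimal ILM policy for a 60-day retention might look like the sketch below. The policy name and the rollover thresholds are illustrative, not prescriptive; note that `min_age` in the delete phase is measured from rollover, not from index creation, so the effective retention is rollover interval plus 60 days.

```
PUT _ilm/policy/60-day-retention
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "60d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

The policy then needs to be referenced from the index templates used by your Beats and APM indices (Beats ship with their own default policies, which you can edit instead of creating a new one).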
Hi Team,
If I follow this approach, will it be correct for a 2-node cluster with one hot and one warm node? Also, is it possible to have the hot and warm phases on the same node? (Note: 24 GB of data is ingested per day, and filebeat, metricbeat, apm-transaction and apm-span each have 12 indices. So at any time 4 indices, one from each agent, will be in the hot phase.)
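If you do split into dedicated hot and warm nodes, the tiers are usually expressed via node roles in each node's elasticsearch.yml. A sketch, assuming the data-tier roles introduced in Elasticsearch 7.10 (older versions use custom `node.attr` attributes plus shard allocation filtering instead):

```yaml
# elasticsearch.yml on the hot node
node.roles: [ master, data_hot, data_content, ingest ]

# elasticsearch.yml on the warm node
node.roles: [ master, data_warm, data_content ]
```

A single node can also hold both roles (`data_hot` and `data_warm` together), in which case hot and warm phases run on the same node and the phase transition changes index settings rather than moving shards.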