What are the best practices for data retention?

Hello, I am wondering what the best practices are for handling very large indices. For example, if a firewall sends a huge volume of logs to the Elastic Stack, what is the best way to handle this situation?

Is it also possible to archive the data but still be able to read it?

Best practice is to use ILM (see "ILM: Manage the index lifecycle", Elasticsearch Reference [7.11] | Elastic).
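In case it helps, here is a minimal sketch of what such a policy could look like in Kibana Dev Tools (the policy name, age, and size thresholds are made up for illustration; adjust them to your own retention requirements). Indices in the warm and cold phases remain searchable, which covers the "archive but still readable" case, and the delete phase removes data once you no longer need it:

```
// Hypothetical ILM policy for high-volume firewall logs
PUT _ilm/policy/firewall-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          // Roll over to a new backing index when either limit is hit
          "rollover": {
            "max_size": "50gb",
            "max_age": "1d"
          }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          // Shrink and force-merge to reduce shard count and disk usage
          "shrink": { "number_of_shards": 1 },
          "forcemerge": { "max_num_segments": 1 }
        }
      },
      "cold": {
        "min_age": "30d",
        "actions": {
          // Frozen indices use fewer resources but can still be searched
          "freeze": {}
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

You would then attach the policy to your firewall indices via an index template (the `index.lifecycle.name` setting) so that rollover keeps creating new, smaller backing indices instead of one ever-growing index.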

