Elasticsearch index management strategy

Does Elasticsearch have a mechanism like Docker for storing logs?
Docker's log management splits logs across many files, and during log rotation only the oldest log files are deleted, rather than deleting an entire index as Elasticsearch does.
I need a logging mechanism similar to Docker's to meet the requirements of stress testing.

In Elasticsearch you generally store data in data streams, which are backed by a number of indices, each of which covers a period of time. When you delete data you delete the complete indices holding the oldest data. This is generally automated using ILM (index lifecycle management).
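As a rough sketch of what this looks like, the index template below creates a data stream and attaches an ILM policy to it (the names `logs-app-*` and `logs-policy` are illustrative, not anything from your setup):

```json
PUT _index_template/logs-template
{
  "index_patterns": ["logs-app-*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.lifecycle.name": "logs-policy"
    }
  }
}
```

Any document indexed into e.g. `logs-app-prod` then goes into a data stream whose backing indices are created, rolled over, and eventually deleted by the referenced ILM policy.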

Thank you for your answer!
If I use the Logstash format to create a new index every day and set a deletion time of 7 days in ILM, and during a stress test the number of indexed documents that day grows too large (reaching the upper limit of the PVC), causing the Kubernetes pod to crash, what should I do?
(ES is deployed as a pod in k8s, and the ES data is mounted from a physical server via NFS.)

Do not use Logstash to create an index per calendar day. Instead create a data stream with rollover. This lets you create new backing indices based on time (during periods of low ingest volume) or on data volume (during periods of high ingest). You can set the maximum shard/index size so that you hold e.g. 10-20 indices at full size.
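A minimal ILM policy along these lines might look as follows; the `1d` and `50gb` thresholds are placeholder values you would tune to your own PVC capacity:

```json
PUT _ilm/policy/logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "1d",
            "max_primary_shard_size": "50gb"
          }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

Whichever rollover condition is hit first wins, so during a stress test the size limit caps each backing index even if less than a day has passed.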

ILM is usually used to manage retention, but it can only delete based on time, not total size. That is however something Curator supports, so you may want to use it instead of ILM. You can set a maximum size your indices are allowed to take up; if this size is exceeded, Curator will periodically check and delete the oldest indices until the total size is below the threshold.
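As a sketch, a Curator action file using its `space` filter could look like this (the `logs-` prefix and the 100 GB limit are assumptions for illustration):

```yaml
actions:
  1:
    action: delete_indices
    description: >-
      Delete the oldest matching indices once their combined
      size exceeds 100 GB.
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: logs-
      - filtertype: space
        disk_space: 100
        use_age: True
        source: creation_date
```

You would then run Curator on a schedule (e.g. a Kubernetes CronJob) so the check happens periodically rather than continuously.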

The use of NFS is generally not recommended as it can lead to very poor performance and possibly also stability issues if not mounted correctly.

Thanks for your answer!
I carefully studied what you said about Curator yesterday, and it can meet my needs!

You mentioned that it is not recommended to use NFS for Elasticsearch storage, so how can I solve the storage problem for ES in a k8s cluster?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.