Limit storage needs by automatically removing data after 28 days

I have my small test environment up and running, and it is collecting data from different sources.
Now I wonder how I can manage the disk space it uses. There is only one node in the cluster, and it is non-productive.
Is it possible to configure the system to automatically remove data older than 28 days, or, what would be much better, to set a storage limit of 60 GB so that the oldest data above this limit gets deleted?
I checked this forum but did not find the answer.
Any hints for me?

ILM (Index Lifecycle Management) is exactly for this... perhaps take a look.
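To answer the 28-day part directly: a minimal sketch of a policy could look like the one below (my-28d-policy is just a placeholder name; adjust the rollover thresholds to your ingest rate). One caveat: ILM works on index age and rollover size, not on a total storage budget, so there is no direct "keep everything under 60 GB" setting; rolling over by size and deleting old indices is the usual approximation.

PUT _ilm/policy/my-28d-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "7d",
            "max_primary_shard_size": "5gb"
          }
        }
      },
      "delete": {
        "min_age": "28d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

Keep in mind that min_age in the delete phase is measured from rollover, so the effective retention is roughly the hot-phase age plus 28 days.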

What are you using to ingest your data?

Thanks for this hint. I had been there, but I wondered where I could see an "overview" of the disk space used.
The values I can see in "Index Management" are far from explaining why my HD ran out of space.
I am using the "Elastic Agent" and "System" integrations for the moment, and "Elastic Defend" as well.
What I'd like to have is a kind of "overview" of all indices, or whatever else is using space, so I could focus on the big ones.
Is something like this configurable or available?
Guess I will have to take a close look into the ILM manual :wink:

I think there is a storage analyzer coming in the next couple of releases.

Otherwise:

Stack Management -> Index Management
Show hidden indices
Sort by size

Also, in Kibana -> Dev Tools:

This will show each node's disk usage (du = disk used, dt = disk total, dup = disk used percent) along with heap and RAM stats:
GET /_cat/nodes?v&h=name,du,dt,dup,hp,hc,rm,rp,r

This will list indices in descending order of primary store size:
GET /_cat/indices/*?v&s=pri.store.size:desc
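
To see which policy actually manages a given index or data stream, the ILM explain API helps. The logs-* pattern here is just an example; Elastic Agent typically writes into logs-* and metrics-* data streams:

GET logs-*/_ilm/explain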

Adjusting ILM is the proper method ...
All these indices already have default ILM policies.
You can adjust them; it will warn you that you are editing a "System Default" policy, but it is fine.
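
For example, here is an untested sketch of adding a delete phase to the policy used by Elastic Agent log data streams. The policy name logs is what recent versions use by default, but verify yours first, and note that PUT replaces the whole policy, so copy the existing phases and only change what you need:

# Inspect the existing policy and copy its phases into the PUT below
GET _ilm/policy/logs

# Re-submit it with a delete phase added
PUT _ilm/policy/logs
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "28d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}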
