Elastic Cloud kills itself with logs

Hi, I don't know where to post a bug for Elastic to fix, so I'm trying here.

When you pick the smallest CPU-optimized instance, it sets up the logs index lifecycle policy with a 50 GB size limit, marks it as a managed index, and warns that if you change it you might break Kibana.

The issue here is that the smallest instance has 35 GB of storage, and it dies once the logs fill it up. You might want to fix that.

Hi @DanHampl, welcome to the community, and thanks for trying Elastic Cloud...

Yup, that can happen, but it's not a bug / it's by design... There is a long history behind the 50 GB shard size / ILM policies etc. The defaults are designed for log ingestion at a medium scale, with a cluster size that fits the more common use cases, i.e. "sensible" defaults.

Also, the default ILM policy has unlimited retention (i.e. it does not delete any data); it is up to the user to apply their own ILM policy etc... So if the user never applies any ILM changes / their own ILM policies, the cluster will eventually fill up.
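You can confirm this yourself from Kibana Dev Tools: the built-in policy has a hot phase with a rollover action but no delete phase, so rolled-over indices accumulate forever. (This assumes the default data-stream policy is named `logs`; the name can vary by stack version.)

```json
GET _ilm/policy/logs
```

In the response, look at `policy.phases` — if there is no `delete` entry, nothing ever ages out.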

Logs with the default ILM policy are not intended for 1 GB-sized nodes.

You can change the managed ILM policy; it won't hurt anything... But even with a smaller shard size you will still run out of room, since the default ILM policy does not delete anything.
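A minimal sketch of what "apply your own ILM policy" can look like, in Kibana Dev Tools console syntax. The policy name, sizes, and ages here are illustrative assumptions for a ~35 GB node, not recommendations — tune them to your ingest rate:

```json
PUT _ilm/policy/logs-small-node
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "2gb",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

The delete phase is the part the default is missing: once an index has been rolled over and is older than `min_age`, it gets removed, so disk usage stays bounded. You can either attach a policy like this to your logs data streams via an index template (`index.lifecycle.name` setting) or just edit the managed policy in place.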

To me, that sounds like bad design: something is marked "do not touch or you might break your environment", yet leaving it untouched is what actually breaks the environment...

Thanks for the recommendations though. I've already set those up, and it all seems to be working.


Yup... agreed, the wording of that message is bad design.

You are welcome to open an issue against the elasticsearch repo.

As a side note, people have been filling up their Elasticsearch disks since the day it was first released. :slight_smile: