Elasticsearch doesn't start due to insufficient disk space

We are using Elasticsearch to store logs from around 15 servers in an AWS environment. Disk space is running out, and Elasticsearch won't start because of it. We are also using X-Pack in Kibana for its Machine Learning feature, and we are not sure how the machine learning jobs will behave if we delete older data. Can anyone suggest the best practice for handling this in production for this use case?

@Lahari
Do you need to keep all of the data available at once? How much RAM does the server running Elasticsearch have?

You can change the heap size Elasticsearch uses by editing the jvm.options file.
You can also use logrotate to compress and delete old log files and free up space, depending on your requirements.
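For reference, the heap settings live in config/jvm.options. A minimal sketch (the 4g value is an illustrative placeholder, not from this thread; a common rule of thumb is roughly half the machine's RAM, capped below ~31g):

```
# config/jvm.options — set min and max heap to the same value
# 4g is an illustrative placeholder; tune to your host's RAM
-Xms4g
-Xmx4g
```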

I think the question is about hard disk size.

@Lahari Can you clarify whether you mean disk or RAM?

Getting rid of old indices shouldn't really interfere with machine learning, but I'm not 100% sure.

To remove old indices, use Elasticsearch Curator: https://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html
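For example, a minimal sketch of a Curator action file that deletes time-based indices older than 30 days. The logstash- prefix, the %Y.%m.%d date pattern, and the 30-day retention are assumptions; adapt them to your index naming and to whatever history your ML jobs need:

```yaml
# delete_old_indices.yml — run with: curator --config config.yml delete_old_indices.yml
actions:
  1:
    action: delete_indices
    description: Delete indices older than 30 days, based on the date in the index name
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-        # assumed index prefix; change to match yours
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'  # assumed daily index naming
      unit: days
      unit_count: 30          # assumed retention period
```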

And, of course, make sure you rotate logs etc. to free up disk space if Elasticsearch shares drives with other services.
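If plain log files on the same volume are part of the disk pressure, a logrotate rule along these lines can help (the path, pattern, and retention below are hypothetical):

```
# /etc/logrotate.d/app-logs — hypothetical path and pattern
# keep two weeks of compressed, rotated logs
/var/log/app/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
}
```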
