Guys, how do I set a limit on disk usage in Elasticsearch? My Elasticsearch server is running out of disk space.
Have a look at the cluster.routing.allocation.disk.watermark.flood_stage and cluster.routing.allocation.disk.watermark.high settings at https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html (also note that the flood_stage one was added in 6.0, so you'll need to be on 6.x to take advantage of that setting).

The high watermark tells Elasticsearch "when this node's disk is more than this percentage full, don't allocate shards to it" (and it will also try to relocate shards away). The flood_stage watermark is "harsher" in that it marks indices with shards on that node read-only once the threshold is passed. Once you hit the flood stage, you really need to clean up some old data on the server.
I understand. However, how do I automatically remove old indices from Elasticsearch, for example, deleting them after 30 days?
There's currently nothing inside of Elasticsearch itself that does that type of index curation. For automatically deleting indices, many people use Curator. The Curator documentation has an example configuration that does almost exactly what you're looking for (it uses 45 days instead of 30, so you can change that as needed).
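As a rough illustration, here's a minimal sketch of a Curator action file for the 30-day case. It assumes your indices use a logstash- prefix (swap in your own pattern) and that you run Curator on a schedule, for example from cron.

```yaml
# Sketch of a Curator action file (e.g. delete_old_indices.yml), assuming
# indices prefixed with "logstash-". Run it with something like:
#   curator --config /path/to/curator.yml delete_old_indices.yml
actions:
  1:
    action: delete_indices
    description: Delete indices older than 30 days, based on index creation date.
    options:
      ignore_empty_list: True
    filters:
      # Only consider indices whose names start with "logstash-".
      - filtertype: pattern
        kind: prefix
        value: logstash-
      # ...and keep only those created more than 30 days ago.
      - filtertype: age
        source: creation_date
        direction: older
        unit: days
        unit_count: 30
```

Scheduling the command above via cron (say, once a day) gives you the automatic cleanup you're after.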