I have a very basic question, but one that I haven't seen asked or answered directly.
Is Elasticsearch disk usage intended to be unbounded? Apart from deleting indices or their data with a manual or scripted process, is there no way to set a limit on how much disk space an index will use?
Starting with version 6.0 you can set `cluster.routing.allocation.disk.watermark.flood_stage` (95% by default), which switches an index to read-only once disk usage on a node holding one of its shards goes above this threshold.
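For reference, a minimal sketch of adjusting this watermark through the cluster settings API (the 95% value here just restates the default, pick whatever fits your disks):

```
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}
```

Note that in 6.x, once the flood stage trips, the resulting `index.blocks.read_only_allow_delete` block has to be cleared manually after freeing up space; it is not lifted automatically.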
Thanks. Based on this I am inferring that users are expected to delete by query if they have a fixed disk budget in mind for Elasticsearch.
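For what it's worth, a sketch of what such a cleanup could look like with the delete-by-query API (the index name `logs` and the field `@timestamp` are placeholders, not anything from this thread):

```
POST /logs/_delete_by_query
{
  "query": {
    "range": {
      "@timestamp": { "lt": "now-30d" }
    }
  }
}
```

Keep in mind that deleted documents don't free disk space immediately; the space is reclaimed only when the underlying segments are merged.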
Yes, but keep in mind that there is no notion of an index size quota (i.e. indices can use as much disk space as is available to them). The setting above operates at the node level: many shards can land on the same disk of a single node, and the watermark limits the total disk usage summed across all of them, so it does not set a hard limit per shard or per index.
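To see how that plays out in practice, you can inspect per-node disk usage and shard counts with the cat allocation API:

```
GET _cat/allocation?v
```

This reports, for each node, the number of shards it holds, how much disk the indices consume, and how much disk is used and available overall, which is what the watermark settings are evaluated against.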
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.