Disk space problem for ELK stack running on an Azure VM

I have installed the ELK stack on an Azure VM. My system details:
Ubuntu 18.04
Elasticsearch 7.8.1

I have configured a curator script that runs every morning, and I am also clearing logs based on their size, but the disk usage keeps increasing. Here is the current disk state:
Filesystem      Size  Used  Avail  Use%  Mounted on
udev            3.9G     0   3.9G    0%  /dev
tmpfs           794M  712K   793M    1%  /run
/dev/sdb1        29G   24G   5.9G   80%  /
tmpfs           3.9G     0   3.9G    0%  /dev/shm
tmpfs           5.0M     0   5.0M    0%  /run/lock
tmpfs           3.9G     0   3.9G    0%  /sys/fs/cgroup
/dev/sdb15      105M  3.6M   101M    4%  /boot/efi
/dev/sda1        16G   45M    15G    1%  /mnt
tmpfs           794M     0   794M    0%  /run/user/1000

So ultimately, clearing index logs is not helping to reduce the disk usage here:
/dev/sdb1 29G 24G 5.9G 80% /

Any suggestions, please?

Per your df output, /dev/sda1 is the temp disk on this Azure VM, mounted on /mnt on Ubuntu. The disk that is filling up, /dev/sdb1 mounted on /, is your OS disk.

Be mindful that the temp disk is ephemeral, not persistent. A VM can be moved to a different host at any time for various reasons, including hardware failures. When this happens, the VM is recreated on the new host using the OS disk from the storage account, and new temp storage is created on the new host, so you will lose any data on that disk.
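Since it's the OS disk that is filling up, it's worth checking what is actually consuming the space before changing retention settings. A quick sketch, assuming a default deb/rpm install where path.data is /var/lib/elasticsearch (adjust the paths to whatever your elasticsearch.yml actually uses):

```shell
# Show the total size of the usual ELK data/log directories on the OS
# disk; directories that don't exist on this VM are skipped.
for d in /var/lib/elasticsearch /var/log/elasticsearch \
         /var/log/logstash /var/lib/logstash; do
  [ -d "$d" ] && du -sh "$d"
done

# Confirm which filesystem the root and temp mounts are on.
df -h / /mnt
```

If most of the 24G turns out to be outside the Elasticsearch data path (system logs, journald, old packages), deleting indices will never reclaim it, which would explain what you're seeing.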

Since you're using Elasticsearch 7.8.1, take a look at Index Lifecycle Management (ILM) for managing indices. You can create a lifecycle policy to delete older indices. I don't know if this will solve the problem you're having, but it may work better for deleting older indices than using curator.
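As an illustration, a minimal delete-only ILM policy might look like the following (the policy name and the 7-day retention are placeholders; you would apply it with PUT _ilm/policy/cleanup-old-logs in Kibana Dev Tools and reference it from your index templates):

```json
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

Note that min_age is measured from index creation (or from rollover, if you use a rollover action in an earlier phase), so pick a value that matches how your indices are named and rotated.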
