My ELK stack's /dev/mapper/centos-root filesystem is getting full (95%)

Hello All,
I have been trying to figure this out, but with no luck so far. I would appreciate some help with this.
My Elasticsearch node's root filesystem is filling up:

[root@srvde432 nodes]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 7.8G     0  7.8G   0% /dev
tmpfs                    7.8G     0  7.8G   0% /dev/shm
tmpfs                    7.8G  792M  7.0G  10% /run
tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G   48G  3.0G  95% /
/dev/sda2               1014M  236M  779M  24% /boot
/dev/mapper/centos-home  2.0T  1.7T  293G  86% /home
tmpfs                    1.6G     0  1.6G   0% /run/user/0

Is there any way I can delete the old files, and have files deleted automatically whenever the node reaches 90% usage?
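As a stopgap I was thinking of a cron check along these lines (a minimal sketch, assuming GNU df/awk; what action to take at 90% is still the open question):

```shell
# usage_pct: read `df` output on stdin and print the Use% figure (digits
# only) for the given mount point. Plain text processing, so easy to test.
usage_pct() {
  awk -v m="$1" '$NF == m { gsub(/%/, "", $(NF-1)); print $(NF-1) }'
}

# Live check of the root filesystem; a cron job could compare this to 90:
pct=$(df -h / | usage_pct /)
echo "root usage: ${pct}%"
```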

I also ran

[root@srvde432 bin]# curator --config /etc/curator/config.yml /etc/curator/action.yml

to delete indices older than 90 days.

However, even after deleting the old indices, the root filesystem still shows 95% used:

[root@srvde432 /]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 7.8G     0  7.8G   0% /dev
tmpfs                    7.8G     0  7.8G   0% /dev/shm
tmpfs                    7.8G  720M  7.1G  10% /run
tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G   48G  3.0G  95% /
/dev/sda2               1014M  236M  779M  24% /boot
/dev/mapper/centos-home  2.0T  751G  1.3T  38% /home
tmpfs                    1.6G     0  1.6G   0% /run/user/0
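One thing I notice: between the two df runs, /home dropped from 1.7T to 751G used while / stayed at 95%, so the deleted indices may not have been on the root filesystem at all. Here is a quick sketch to see what is actually big on / only (scanning /var is just my guess for where the data and logs live):

```shell
# top_dirs: list the biggest subdirectories under a path, largest first.
# -x keeps du on a single filesystem, so the separate /home mount is
# skipped when scanning /.
top_dirs() {
  du -xh --max-depth=2 "${1:-/}" 2>/dev/null | sort -rh | head -n 15
}

top_dirs /var    # assumption: ES/Logstash data and logs live under /var
```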

Elasticsearch version:
Version: 7.12.0, Build: default/rpm/78722783c38caa25a70982b5b042074cde5d3b3a/2021-03-18T06:17:15.410153305Z, JVM: 15.0.1

Logstash version:
logstash 7.12.0

[root@srvde432 bin]# cat /etc/curator/action.yml
actions:
  1:
    action: delete_indices
    description: Delete_indices_older_90_days
    options:
      ignore_empty_list: True          # If True, an empty index list logs INFO; if False, it raises an error and exits
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: (collectd|opt)-*
      exclude:
    - filtertype: age                   # Select indices older than the cutoff below
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 90
  2:
    action: delete_indices
    description: Delete_indices_older_180_days
    options:
      ignore_empty_list: True          # If True, an empty index list logs INFO; if False, it raises an error and exits
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: (afm|apm|asm)*
      exclude:
    - filtertype: age                   # Select indices older than the cutoff below
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 180

[root@srvde432 /]# cat /etc/curator/config.yml
client:
  hosts:
  - 127.0.0.1
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']
[root@srvde432 /]#
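Before letting these actions loose I also wanted to sanity-check what the name-based age filter matches (curator's --dry-run flag logs what would be deleted without actually deleting). The filter is easy to emulate in shell; the index names below are made up for illustration:

```shell
# Rough emulation of curator's name-based age filter: print index names
# whose trailing %Y.%m.%d stamp sorts before the cutoff (lexicographic
# comparison works because the stamps are zero-padded).
older_than() {
  cutoff=$1
  while read -r idx; do
    stamp=${idx##*-}
    if expr "$stamp" \< "$cutoff" >/dev/null; then
      echo "$idx"
    fi
  done
}

# Made-up index names; cutoff 2021.03.22 is 90 days before 2021-06-20:
printf 'collectd-2020.05.01\ncollectd-2021.06.01\n' | older_than 2021.03.22
```

which prints only collectd-2020.05.01, the index older than the cutoff.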

Logstash is also keeping old log files around, some from as far back as 2020.
The log4j.properties file was left at its defaults, which I guess is why the old logs were never deleted.
I added retention settings to the log4j.properties file and restarted the logstash service to see if that would delete the old files.
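(For reference, log4j2 can delete old files via a Delete action on the rollover strategy; this is a generic sketch with an assumed base path and 7-day age, not necessarily the exact lines I used:)

```properties
# Sketch only - base path and age are placeholder values:
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:ls.logs}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = logstash-*
appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.age = 7D
```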

That did not help.
I then ran 'rm file_name_2020*' to delete all the files from 2020. The command completed, but when I check the directory again the files still do not seem to be properly deleted (even though rm reported no error).
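If the real issue is that the space is not freed: rm only removes the directory entry, and the disk blocks are reclaimed once no process still has the file open, so a running logstash can keep "deleted" log files alive. A small self-contained demonstration:

```shell
# rm only removes the directory entry; the disk blocks are reclaimed when
# the last open file descriptor on the file is closed.
tmp=$(mktemp)
exec 3>"$tmp"               # keep the file open, like a running logger would
echo "still being written" >&3
rm "$tmp"                   # unlink succeeds, no error is reported...
ls "$tmp" 2>/dev/null || echo "directory entry gone, but space still in use"
exec 3>&-                   # ...and only closing the descriptor frees space
```

On the server, lsof +L1 should list files that are deleted but still held open, and restarting the logstash service releases them.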