There is a useful command for finding the top 10 largest directories: du -h / | sort -rh | head -10
A few iterations of this, for instance 1. du -h /var, then 2. du -h /var/logs, will lead you to what to clean.
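The drill-down above can be sketched end to end. This is a hypothetical demo (the /tmp/du_demo paths and file sizes are made up for illustration): it builds a small directory tree, then uses du + sort + head to find the largest subdirectory, exactly as you would iteratively on /var.

```shell
# Hypothetical demo tree: one "big" and one "small" directory.
mkdir -p /tmp/du_demo/big /tmp/du_demo/small
dd if=/dev/zero of=/tmp/du_demo/big/file bs=1024 count=2048 2>/dev/null   # ~2 MB
dd if=/dev/zero of=/tmp/du_demo/small/file bs=1024 count=10 2>/dev/null   # ~10 KB

# Human-readable ranking, same shape as the command from the answer above:
du -h /tmp/du_demo/* | sort -rh | head -10

# Machine-friendly version: pick out the single largest subdirectory.
top=$(du -sk /tmp/du_demo/* | sort -rn | head -1 | awk '{print $2}')
echo "largest: $top"
```

On a real system you would repeat the same command one level deeper inside whatever directory came out on top, until you find the actual files worth cleaning.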
If you don't need them, remove or move old ES logs; then, if possible, clean the /tmp directory, then other unused directories. Be aware that if you delete, for instance, 100 MB of data, you will immediately see 100 MB freed on the disk, and Lucene has its own internal mechanism for releasing free disk space.
I think the issue is with tasks or processes that are not releasing disk space. These processes resume when I start the service.
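That symptom is consistent with deleted-but-still-open files: on Linux, a file removed while a process holds it open keeps occupying disk space until the process closes it (or exits), which is why space only comes back when the service is restarted. A minimal sketch, assuming a Linux system with /proc available (the tmpfile and the background tail process are purely illustrative):

```shell
# Create a ~1 MB file and keep it open with a background process.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1024 count=1024 2>/dev/null
tail -f "$tmpfile" > /dev/null &
tailpid=$!
sleep 1

# Remove the directory entry -- but the data blocks are NOT freed yet,
# because tail still holds an open file descriptor.
rm "$tmpfile"

# /proc shows the handle pointing at a "(deleted)" file.
deleted_count=$(ls -l "/proc/$tailpid/fd" 2>/dev/null | grep -c deleted)
echo "open deleted files: $deleted_count"

# Only once the process closes the fd (here: exits) is the space released.
kill "$tailpid"
```

You can hunt for such files on a live system with lsof +L1, which lists open files whose link count is zero.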
After adding another disk to the cluster, it starts consuming that disk's space as well.