ELK server: 23 GB released after reboot

After a reboot, the used space on /dev/sda1 went down from 30 GB to 7 GB, so 23 GB was released. This has happened a few times, and ELK is the only thing running on this server. Next time, how do I find out what is taking up that 23 GB?

before reboot:

df -h

[screenshot of df -h output before reboot]

after reboot:

df -h

[screenshot of df -h output after reboot]

The ELK data and log paths point to /mnt:

# kibana.yml
logging.dest: /mnt/elk/kibana/log/kibana.log
path.data: /mnt/elk/kibana/path_data

# elasticsearch.yml
path.data: /mnt/elk/elasticsearch/path_data
path.logs: /mnt/elk/elasticsearch/path_logs

# logstash.yml
path.data: /mnt/elk/logstash/path_data
path.logs: /mnt/elk/logstash/path_logs

As you have probably figured out by now, given your other post: if you delete a file on UNIX, the space is not freed until the last process that has the file open closes it or exits. If Logstash has a multi-GB log file open, deleting that file will not free any space, but rebooting will, because the reboot kills the process that was holding the file open.
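To confirm this before the next reboot, you can check for deleted-but-still-open files from the shell. A minimal sketch, assuming lsof is installed:

# Open files whose link count is 0, i.e. deleted but still held open by a process;
# the SIZE/OFF column shows how much space each one still consumes,
# and COMMAND/PID shows which process is holding it.
sudo lsof +L1

# Or filter the full list for entries lsof marks as deleted:
sudo lsof -nP | grep '(deleted)'

Killing or restarting the process listed there (Logstash, in this case) will release the space without a full reboot.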


If you are using the File Input plugin, you may want to consider adding a close_older setting so the plugin closes file handles for files that have not received new bytes in a while.
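A minimal sketch of what that could look like in a pipeline config (the path is a placeholder, and depending on the plugin version close_older is given as a duration string or a number of seconds):

input {
  file {
    path => "/var/log/myapp/*.log"    # placeholder path
    # Close the file handle when no new data has arrived for an hour,
    # so a deleted log file can actually release its disk space.
    close_older => "1 hour"
  }
}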

Hi @databeata, this post is not directly related to your issue, it is just a Linux tip:
I think it would be a good idea to have a separate mount point for your data, perhaps on an LVM volume, for two reasons.

  1. When the / (root) filesystem fills up you can run into many problems, and you may not even be able to log in to the server.
  2. An LVM volume can be extended dynamically, so if your data grows you can grow the volume without stopping any service (see the sketch after this list).
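A rough sketch of growing such a volume online, assuming a logical volume named data in a volume group named vg0 with an ext4 filesystem (all names are placeholders):

# Grow the logical volume by 20 GB and resize the ext4 filesystem in one step.
lvextend -r -L +20G /dev/vg0/data

# Or as two separate steps:
lvextend -L +20G /dev/vg0/data
resize2fs /dev/vg0/data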

Excuse the off-topic post.
Best regards
