/dev/sda1 usage grew from 6.2G (13%) to 49G (100%) in 4 days

Usage of /dev/sda1 went from 6.2G (13%) to 49G (100%) in 4 days.

Free memory went from 51G to 2G over the same period.

How do I manage this?

8/3 (after a reboot):
-- /dev/sda1: 6.2G used, 13%
-- free memory: 51G

8/7 (4 days later):
-- /dev/sda1: 49G used, 100%
-- free memory: 2G

df -h

(screenshot: df -h output)

free -g
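A note on the memory numbers: on Linux, "free" memory shrinks over time because the kernel uses idle RAM for page cache, so a drop from 51G to 2G is not necessarily a leak. The "available" column is the better health indicator; a quick check with standard tools, nothing ELK-specific:

free --human    # look at the "available" column rather than "free"
grep -E 'MemAvailable|^Cached' /proc/meminfo    # available memory and page cache, in kB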

sudo du --max-depth=1 --human-readable / | sort --human-numeric-sort

(screenshot: output of the commands above)
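One caveat on the du command above: since /datadrive is a separate filesystem, du / also descends into it and mixes both drives together. To measure only what is filling the root filesystem (/dev/sda1), keep du on a single filesystem; a sketch:

sudo du --one-file-system --max-depth=2 --human-readable / | sort --human-numeric-sort | tail --lines=20    # largest directories on /dev/sda1 only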

Only the ELK stack is running on this VM.

Only one Logstash job is running.

ELK runs from /datadrive, not from the OS drive:

/datadrive/usr_share_logstash/logstash/bin/logstash

/datadrive/elasticsearch-6.0.0/bin/elasticsearch

/datadrive/kibana-6.0.0-linux-x86_64/bin/kibana

path.logs and path.data are set to /datadrive, not to the OS drive:

# logstash.yml
path.logs: /datadrive/elk/logstash/path_logs
path.data: /datadrive/elk/logstash/path_data

# elasticsearch.yml
path.logs: /datadrive/elk/elasticsearch/path_logs
path.data: /datadrive/elk/elasticsearch/path_data

# kibana.yml
logging.dest: /datadrive/elk/kibana/path_logs/kibana.log
path.data: /datadrive/elk/kibana/path_data
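To double-check where each ELK process actually holds its open files, you can walk /proc; this is a sketch, assuming pgrep matches the three process names and that everything legitimate lives under /datadrive:

for pid in $(pgrep --full 'elasticsearch|logstash|kibana'); do
    sudo ls -l /proc/$pid/fd 2>/dev/null | grep --invert-match /datadrive    # open files outside /datadrive
done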

ls -lh /datadrive/elk/kibana/path_logs

(screenshot: directory listing)

ls -lh /datadrive/elk/elasticsearch/path_logs

ls -lh /datadrive/elk/logstash/path_logs

(screenshot: directory listings)

If it magically gets freed up by a reboot, then it is probably a log file that you have deleted but that is still held open by a running process. Do you rotate the Kibana logs and restart it nightly?
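A way to confirm this without rebooting (assuming lsof is installed): list open files whose on-disk link count is zero, i.e. deleted but still held open, and reclaim the space by truncating the file through /proc instead of restarting. PID and FD below are placeholders to be taken from the lsof output:

sudo lsof +L1    # open files with link count 0: deleted but still consuming space
sudo truncate --size=0 /proc/PID/fd/FD    # zero out the deleted file without restarting the process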

Where do I check the rotation? It is not in kibana.yml.

"Do you rotate the Kibana logs and restart it nightly?"

Unless you have configured it yourself, the logs are not being rotated. You would use an external tool like logrotate to do that.
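For reference, a minimal logrotate sketch for the Kibana log path quoted in this thread; the location /etc/logrotate.d/kibana is the conventional drop-in directory, not something from the original post. copytruncate rotates the file in place, so Kibana keeps its open file handle and no nightly restart is needed:

# /etc/logrotate.d/kibana (assumed location)
/datadrive/elk/kibana/path_logs/kibana.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate    # copy then truncate in place, so the open file handle stays valid
}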
