Hi, I've been having issues with the S3 output plugin leaving temporary files open after they've been deleted. This causes disk space to fill up, because the space isn't actually freed until the open file handles are closed. Once the process is killed or restarted, the handles are released and the space is freed.
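The mismatch is easy to see on disk: since the files have already been unlinked, `du` of the temporary directory stays small while `df` on that filesystem keeps climbing. A quick check along these lines shows it (temp dir path taken from the config below):

```sh
# Space the directory tree still accounts for (deleted files no longer show up here)
du -sh /tmp/logstashS3
# Space the filesystem actually reports as used (includes the deleted-but-open files)
df -h /tmp
```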
I'm wondering if there's some file handle cleanup that isn't happening correctly, or maybe an unexpected interaction between config options.
My interim workaround right now is to restart the process every hour, which is less than ideal.
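For reference, the hourly restart is nothing fancy, just a cron entry along these lines (the `logstash` service name here is an assumption; use whatever your install calls it):

```sh
# /etc/cron.d/logstash-restart - hourly restart to release the deleted-but-open handles
0 * * * * root /usr/bin/systemctl restart logstash
```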
Details below, please let me know what else I can do to help out.
Config example:
```
output {
  s3 {
    access_key_id => "XYZ"
    secret_access_key => "XYZ"
    bucket => "XYZ"
    prefix => "network/%{+yyyy}-%{+MM}-%{+dd}/%{type}/"
    #time_file => 7
    temporary_directory => "/tmp/logstashS3"
    # Rotate files every hour, or 100M, whichever comes first.
    size_file => 100000000
    time_file => 60
    codec => "json_lines"
    upload_workers_count => 2
    encoding => "gzip"
  }
}
```
Open files (there are a lot more, but here's a snippet):
```
# lsof -n | grep '(deleted)'
S3 15298 15439 root 88w REG 253,4 100003250 5245314 /tmp/logstashS3/UUID/network/2018-02-16/panos/ls.s3.UUID.2018-02-16T10.17.part1.txt.gz (deleted)
S3 15298 15439 root 97w REG 253,4 100007529 524422 /tmp/logstashS3/UUID/network/2018-02-16/panos/ls.s3.UUID.2018-02-16T10.29.part2.txt.gz (deleted)
S3 15298 15450 root 88w REG 253,4 100003250 5245314 /tmp/logstashS3/UUID/network/2018-02-16/panos/ls.s3.UUID.2018-02-16T10.17.part1.txt.gz (deleted)
S3 15298 15450 root 97w REG 253,4 100007529 524422 /tmp/logstashS3/UUID/network/2018-02-16/panos/ls.s3.UUID.2018-02-16T10.29.part2.txt.gz (deleted)
S3 15298 15451 root 88w REG 253,4 100003250 5245314 /tmp/logstashS3/UUID/network/2018-02-16/panos/ls.s3.UUID.2018-02-16T10.17.part1.txt.gz (deleted)
S3 15298 15451 root 97w REG 253,4 100007529 524422 /tmp/logstashS3/UUID/network/2018-02-16/panos/ls.s3.UUID.2018-02-16T10.29.part2.txt.gz (deleted)
S3 15298 15452 root 88w REG 253,4 100003250 5245314 /tmp/logstashS3/UUID/network/2018-02-16/panos/ls.s3.UUID.2018-02-16T10.17.part1.txt.gz (deleted)
S3 15298 15452 root 97w REG 253,4 100007529 524422 /tmp/logstashS3/UUID/network/2018-02-16/panos/ls.s3.UUID.2018-02-16T10.29.part2.txt.gz (deleted)
S3 15298 15453 root 88w REG 253,4 100003250 5245314 /tmp/logstashS3/UUID/network/2018-02-16/panos/ls.s3.UUID.2018-02-16T10.17.part1.txt.gz (deleted)
S3 15298 15453 root 97w REG 253,4 100007529 524422 /tmp/logstashS3/UUID/network/2018-02-16/panos/ls.s3.UUID.2018-02-16T10.29.part2.txt.gz (deleted)
```
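To put a rough number on how much space those handles are pinning, something like this works against the lsof output above (column positions assume the same lsof format as shown; unique files are keyed by inode so a file held by several worker threads is only counted once):

```sh
# Sum the sizes of deleted-but-open files under the S3 temp dir, deduplicated by inode ($9); size is $8
lsof -n | grep '(deleted)' | grep '/tmp/logstashS3' | \
  awk '{size[$9] = $8} END {t = 0; for (i in size) t += size[i]; printf "%.1f MiB pinned by deleted files\n", t / (1024 * 1024)}'
```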
JVM Version:
```
[root@netlog opt]# java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
```
OS Version:
```
[root@netlog opt]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
```
The issue appears to happen on both Logstash 5.5.3 and 6.2.1.
```
[root@netlog opt]# logstash-6.2.1/bin/logstash-plugin list --verbose | grep output-s3
logstash-output-s3 (4.0.13)
[root@netlog opt]# logstash-5.5.3/bin/logstash-plugin list --verbose | grep output-s3
logstash-output-s3 (4.0.10)
```