Hi
I am running a Logstash 5.1.1 instance that collects and parses syslogs. The output plugins are a combination of exec (sending syslog data to an executable), elasticsearch, and file (writing formatted output to a text file). The file output portion of the configuration is:
file {
  path => "/var/log/TrapDispatcher/td_digest.log"
  codec => line {
    format => "%{syslog_timestamp} %{syslog_source} %{syslog_input_format} [%{syslog_facility}:%{syslog_severity}] %{syslog_id} %{syslog_program}[%{syslog_pid}]: %{message_content}"
  }
}
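For context, with that line codec each event is rendered as a single line in td_digest.log shaped like the following (the field values here are made up purely for illustration):

```
Aug 16 03:19:17 host01 rfc3164 [local0:notice] 12345 snmptrapd[4321]: example trap message text
```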
Everything works fine, except that at irregular intervals (sometimes hours, sometimes days), Logstash crashes with the following error:
[2017-08-16T03:19:18,506][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<IOError: No space left on device>, :backtrace=>[
  "org/jruby/RubyIO.java:1431:in `write'",
  "/apps/nm/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-file-4.0.2/lib/logstash/outputs/file.rb:296:in `write'",
  "/apps/nm/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-file-4.0.2/lib/logstash/outputs/file.rb:133:in `multi_receive_encoded'",
  "org/jruby/RubyArray.java:1613:in `each'",
  "/apps/nm/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-file-4.0.2/lib/logstash/outputs/file.rb:133:in `multi_receive_encoded'",
  "org/jruby/RubyHash.java:1342:in `each'",
  "/apps/nm/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-file-4.0.2/lib/logstash/outputs/file.rb:131:in `multi_receive_encoded'",
  "org/jruby/ext/thread/Mutex.java:149:in `synchronize'",
  "/apps/nm/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-file-4.0.2/lib/logstash/outputs/file.rb:130:in `multi_receive_encoded'",
  "/apps/nm/elk/logstash/logstash-core/lib/logstash/outputs/base.rb:90:in `multi_receive'",
  "/apps/nm/elk/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:13:in `multi_receive'",
  "/apps/nm/elk/logstash/logstash-core/lib/logstash/output_delegator.rb:47:in `multi_receive'",
  "/apps/nm/elk/logstash/logstash-core/lib/logstash/pipeline.rb:420:in `output_batch'",
  "org/jruby/RubyHash.java:1342:in `each'",
  "/apps/nm/elk/logstash/logstash-core/lib/logstash/pipeline.rb:419:in `output_batch'",
  "/apps/nm/elk/logstash/logstash-core/lib/logstash/pipeline.rb:365:in `worker_loop'",
  "/apps/nm/elk/logstash/logstash-core/lib/logstash/pipeline.rb:330:in `start_workers'"]}
There is plenty of space left on the disk, and if I restart Logstash after the crash (without freeing any disk space), it runs perfectly again until the next crash.
Has anyone seen this before? Is it a known problem? I have no idea how to debug it.
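In case it helps, these are the checks I run after a crash to confirm the disk is not actually full. The inode check is a guess on my part, since I've read that "No space left on device" (ENOSPC) can also be raised when the filesystem runs out of inodes rather than bytes:

```shell
# Free space on the filesystem holding the output file
# (on my box the log lives under /var/log/TrapDispatcher)
df -h /var/log

# Inode usage -- ENOSPC can also mean inode exhaustion, not just a full disk
df -i /var/log
```

Both reports look healthy whenever I check, which is why the error has me stumped.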
Thanks.