Logstash 8.15.2 Error :exception=>#<Errno::ENOENT: No such file or directory - No such file or directory - /tmp/logstash/applog_iq/42dac6d2-4c55-4e75-93d8-044b9f04e1ab/

Hello,
We recently split one big pipeline into 7 pipelines. Ever since we did that, we get the error below when doing a deployment.

Pipeline error {:pipeline_id=>"applog-iq", :exception=>#<Errno::ENOENT: No such file or directory - No such file or directory - /tmp/logstash/applog_iq/42dac6d2-4c55-4e75-93d8-044b9f04e1ab/IQ-Carelle_Test/IQ-Carelle_Test/Purchase Order Portal (PO Portal)/production/application/jboss.server/server.log/TEST-SERVER/2024/11/ls.s3.d059db3f-a61d-450a-8734-60a15q7831c.2024-11-06T15.28.part0.txt>, :backtrace=>["org/jruby/RubyFileTest.java:249:in `size'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.8-java/lib/logstash/outputs/s3/temporary_file.rb:50:in `size'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.8-java/lib/logstash/outputs/s3.rb:370:in `upload_file'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.8-java/lib/logstash/outputs/s3.rb:274:in `block in close'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.8-java/lib/logstash/outputs/s3/file_repository.rb:89:in `block in each_files'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.8-java/lib/logstash/outputs/s3/file_repository.rb:132:in `block in each_factory'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.8-java/lib/logstash/outputs/s3/file_repository.rb:27:in `block in with_lock'", "org/jruby/ext/monitor/Monitor.java:82:in `synchronize'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.8-java/lib/logstash/outputs/s3/file_repository.rb:26:in `with_lock'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.8-java/lib/logstash/outputs/s3/file_repository.rb:131:in `block in each_factory'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.8-java/lib/logstash/outputs/s3/file_repository.rb:129:in `each_factory'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.8-java/lib/logstash/outputs/s3/file_repository.rb:88:in `each_files'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-integration-aws-7.1.8-java/lib/logstash/outputs/s3.rb:273:in `close'", "/usr/share/logstash/logstash-core/lib/logstash/plugin.rb:98:in `do_close'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:75:in `do_close'", "org/jruby/RubyArray.java:1981:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:484:in `shutdown_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:209:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:146:in `block in start'"], "pipeline.sources"=>["/etc/logstash/conf.d/applog-iq.config"], :thread=>"#<Thread:0x2de3b9be /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}

I read some previous posts on this issue, and the recommendation was to create a separate temporary directory for each pipeline. That was done, hence the /tmp/logstash/applog-i* path for each pipeline.
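For reference, each pipeline's s3 output is configured roughly like this (the region, bucket, and prefix shown here are placeholders, not our real values):

output {
  s3 {
    region              => "us-east-1"                     # placeholder
    bucket              => "example-applog-bucket"         # placeholder
    prefix              => "applog-iq/%{+YYYY}/%{+MM}"     # placeholder
    temporary_directory => "/tmp/logstash/applog_iq"       # per-pipeline directory
  }
}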

As a workaround, when this happens we stop Logstash, delete the /tmp/logstash directory, and restart Logstash. However, this does not always work and we sometimes have to repeat it multiple times. Also, after the restart the error may show up on another pipeline.

We didn't have this issue when we had the one big pipeline.

Would you please help us resolve this?

The exception is happening while the pipeline is shutting down. The s3 output stashes events in temporary files before sending them to S3. When it shuts down, the output iterates over the files in its temporary directory: if a file contains any data it is uploaded to S3 (or will be when the pipeline restarts), and if it is empty it is deleted.

The exception is happening when it tests whether file.size == 0 to decide what to do with the file; something else has already deleted it.

To me the most likely cause is that you actually have two outputs using the same temporary directory.
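For example (a hypothetical sketch, not your actual configs), if two pipeline files point their s3 outputs at the same directory, each output will try to size, upload, and delete the other's part files at shutdown:

# /etc/logstash/conf.d/applog-iq.config
output {
  s3 {
    bucket              => "example-applog-bucket"   # placeholder
    temporary_directory => "/tmp/logstash/applog"    # shared...
  }
}

# /etc/logstash/conf.d/applog-xx.config   (hypothetical second pipeline)
output {
  s3 {
    bucket              => "example-applog-bucket"   # placeholder
    temporary_directory => "/tmp/logstash/applog"    # ...same directory, so the two outputs race and hit ENOENT
  }
}

Giving each output its own temporary_directory (for example /tmp/logstash/applog_iq and /tmp/logstash/applog_xx) avoids that collision.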

You were correct, Badger. We did indeed have pipelines using the same temporary directory. We separated those as well, and now everything is working as expected. Thank you very much for the input.
