Logstash fatal error: Not enough free disk space

Hey guys,

I have a problem with Logstash. I configured a persisted queue to process events, and that setup worked, but after I increased the system's RAM and restarted the server, Logstash has been throwing an uncommon error:

[2018-08-09T10:13:24,771][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaIo::IOException", :message=>"Not enough free disk space available to allocate persisted queue.", :backtrace=>["org.logstash.ackedqueue.Queue.ensureDiskAvailable(Queue.java:789)", "org.logstash.ackedqueue.Queue.open(Queue.java:212)", "org.logstash.ackedqueue.ext.JRubyAckedQueueExt.open(JRubyAckedQueueExt.java:96)", "org.logstash.ackedqueue.ext.JRubyWrappedAckedQueueExt.ruby_initialize(JRubyWrappedAckedQueueExt.java:37)", "org.logstash.ackedqueue.ext.JRubyWrappedAckedQueueExt$INVOKER$i$0$7$ruby_initialize.call(JRubyWrappedAckedQueueExt$INVOKER$i$0$7$ruby_initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:743)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:298)", "org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:79)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:83)", "org.jruby.RubyClass.newInstance(RubyClass.java:1022)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145)", "usr.share.logstash.logstash_minus_core.lib.logstash.queue_factory.RUBY$method$create$0(/usr/share/logstash/logstash-core/lib/logstash/queue_factory.rb:23)", "usr.share.logstash.logstash_minus_core.lib.logstash.queue_factory.RUBY$method$create$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/queue_factory.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:77)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:93)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:170)", 
"org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:77)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:93)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:298)", "org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:79)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:83)", "org.jruby.RubyClass.newInstance(RubyClass.java:1022)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:77)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:93)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:305)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:145)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:71)", "org.jruby.runtime.Block.call(Block.java:124)", "org.jruby.RubyProc.call(RubyProc.java:289)", "org.jruby.RubyProc.call(RubyProc.java:246)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:104)", "java.lang.Thread.run(Thread.java:748)"]} 
[2018-08-09T10:13:24,916][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::JavaIo::IOException` for `PipelineAction::Create<main>`>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/converge_result.rb:27:in `create'", "/usr/share/logstash/logstash-core/lib/logstash/converge_result.rb:67:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:317:in `block in converge_state'"]}
[2018-08-09T10:13:24,993][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
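For reference, the persisted queue was enabled with the standard logstash.yml settings; a minimal sketch (the path and size shown here are illustrative placeholders, not my exact values):

```yaml
# logstash.yml -- persisted queue (values are illustrative placeholders)
queue.type: persisted
path.queue: /data/queue        # defaults to path.data/queue if unset
queue.max_bytes: 4gb           # on-disk capacity the queue may grow to
```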

I don't know why Logstash is throwing this error, because I have more than 50 GB available to store the persisted queue on disk.

Can anyone help?

Thanks
Best regards,
Robert

Which version of Logstash? What's the inode situation on that volume? An inode shortage will result in the same error as a space shortage.
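A quick way to check, assuming the default path.data of /usr/share/logstash/data (substitute your own data path if it differs):

```shell
# Inode exhaustion produces the same "Not enough free disk space" error as a
# genuinely full disk, so check inodes as well as bytes. The path below is the
# default Logstash data directory; fall back to the root volume if it's absent.
df -i /usr/share/logstash/data 2>/dev/null || df -i /
```

An IFree value near zero means the volume is out of inodes even though df -h still shows free space.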

I use Logstash 6.3.

Inode situation:
Inodes: 52426752
IUsed: 39496
IFree: 52387256

Okay, so that looks good. Can you enable debug logging? That'll enable a log statement (the one beginning with "opening head page") that includes the value of this.dirPath, so we can verify that Logstash is actually checking the available space on the volume where you have 50 GB free.
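If it helps, debug logging can be switched on persistently in logstash.yml (a sketch; alternatively pass --log.level=debug on the command line for a one-off run):

```yaml
# /etc/logstash/logstash.yml
log.level: debug
```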

@magnusbaeck I have enabled debug logging, what type of information do you need now?

The log entry that begins with "opening head page", seen above.

Here is the log entry:

[2018-08-09T12:52:42,133][DEBUG][org.logstash.ackedqueue.Queue] opening head page: 1603, in: /../data/queue/main, with checkpoint: pageNum=1603, firstUnackedPageNum=1603, firstUnackedSeqNum=54048970, minSeqNum=54036331, elementCount=12689, isFullyAcked=no

Is /data really the intended Logstash data directory? Are you using a relative path somewhere where an absolute path is expected?

Yes, data is the right directory, and no, I don't use a relative path to the data directory (where is a relative path expected?).

Sorry, I meant absolute path (previous post updated). Assuming df /data/queue reports 50 GB free space I have no idea what's going on.

1K-blocks   Used       Available  Use%
104802308   15964016   88838292   16%

So there is no solution at this moment?

As I said I have no idea what's going on.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.