Logstash JDBC file permission errors, cannot send data to Elasticsearch

Hi,
I keep getting errors when running the JDBC input. Logstash complains about being unable to write files on disk, but I have checked the permissions and there is write access. The problem seems to get worse when the SQL query contains an ORDER BY, but the error pops up intermittently without it too. Changing the fetch size (jdbc_fetch_size) doesn't help.

This runs as a pipeline setup with 20 variations of the same query; the only thing that changes is the database. The strange thing is that this exact setup used to work on the same machine.

Does anyone have any suggestions about what to do? We can't read any data into our stack.
Running Elasticsearch and Logstash version 7.10.2 on Windows.
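For reference, each of the 20 pipelines looks roughly like this; the driver, connection string, credentials, query, and index name below are simplified placeholders rather than my real config (only the database in the connection string differs between pipelines):

    input {
      jdbc {
        # placeholder driver and connection details; only the database
        # name differs between the 20 pipelines
        jdbc_driver_library => "C:/drivers/mssql-jdbc.jar"
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        jdbc_connection_string => "jdbc:sqlserver://dbhost;databaseName=archive"
        jdbc_user => "logstash"
        jdbc_password => "${JDBC_PASSWORD}"
        # changing this made no difference
        jdbc_fetch_size => 1000
        # the errors are worse with the ORDER BY, but happen without it too
        statement => "SELECT * FROM documents ORDER BY id"
      }
    }

    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "archive"
      }
    }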

The logs below are from one pipeline only, otherwise this post would be extremely long.

[2021-02-09T22:32:51,121][ERROR][org.logstash.ackedqueue.io.FileCheckpointIO][archive_jdbc] Error writing checkpoint: java.nio.file.AccessDeniedException: C:\logstash7102_3\data\queue\archive_jdbc\checkpoint.head.tmp -> C:\logstash7102_3\data\queue\archive_jdbc\checkpoint.head

[2021-02-09T22:32:51,121][ERROR][logstash.javapipeline ][archive_jdbc] Pipeline error {:pipeline_id=>"archive_jdbc", :exception=>java.nio.file.AccessDeniedException: C:\logstash7102_3\data\queue\archive_jdbc\checkpoint.head.tmp -> C:\logstash7102_3\data\queue\archive_jdbc\checkpoint.head, :backtrace=>["java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:89)", "java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)", "java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:309)", "java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:292)", "java.base/java.nio.file.Files.move(Files.java:1421)", "org.logstash.ackedqueue.io.FileCheckpointIO.write(FileCheckpointIO.java:104)", "org.logstash.ackedqueue.Page.forceCheckpoint(Page.java:225)", "org.logstash.ackedqueue.Page.headPageCheckpoint(Page.java:190)", "org.logstash.ackedqueue.Page.checkpoint(Page.java:179)", "org.logstash.ackedqueue.Page.ensurePersistedUpto(Page.java:218)", "org.logstash.ackedqueue.Queue.ensurePersistedUpto(Queue.java:527)", "org.logstash.ackedqueue.Queue.close(Queue.java:710)", "org.logstash.ackedqueue.ext.JRubyAckedQueueExt.close(JRubyAckedQueueExt.java:161)", "org.logstash.ext.JrubyAckedReadClientExt.close(JrubyAckedReadClientExt.java:69)", "org.logstash.execution.AbstractPipelineExt.close(AbstractPipelineExt.java:384)", "org.logstash.execution.AbstractPipelineExt$INVOKER$i$0$0$close.call(AbstractPipelineExt$INVOKER$i$0$0$close.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:831)", "org.jruby.ir.targets.InvokeSite.fail(InvokeSite.java:248)", "org.jruby.ir.targets.InvokeSite.fail(InvokeSite.java:255)", "C_3a_.logstash7102_3.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$run$0(C:/logstash7102_3/logstash-core/lib/logstash/java_pipeline.rb:202)", "C_3a_.logstash7102_3.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$run$0$__VARARGS__(C:/logstash7102_3/logstash-core/lib/logstash/java_pipeline.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70)", "org.jruby.ir.targets.InvokeSite.fail(InvokeSite.java:248)", "org.jruby.ir.targets.InvokeSite.fail(InvokeSite.java:255)", "C_3a_.logstash7102_3.logstash_minus_core.lib.logstash.java_pipeline.RUBY$block$start$1(C:/logstash7102_3/logstash-core/lib/logstash/java_pipeline.rb:137)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:138)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52)", "org.jruby.runtime.Block.call(Block.java:139)", "org.jruby.RubyProc.call(RubyProc.java:318)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:834)"], "pipeline.sources"=>["C:/logstash7101/config/pipeline_configs/archive/archive.conf"], :thread=>"#<Thread:0x2e65b7e2 run>"}

[2021-02-09T22:32:51,137][INFO ][logstash.javapipeline ][archive_jdbc] Pipeline terminated {"pipeline.id"=>"archive_jdbc"}

Did you already try removing the archive_jdbc directory?

Yes. I have also tried a fresh Logstash install and get the same errors even then.

Can you try without the persisted queue? The error clearly says it is failing to write a checkpoint.
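In pipelines.yml that means removing the queue.type: persisted line (the default queue type is memory) or setting it explicitly. A minimal sketch, using the pipeline id and config path from your logs:

    - pipeline.id: archive_jdbc
      path.config: "C:/logstash7101/config/pipeline_configs/archive/archive.conf"
      queue.type: memory   # was: queue.type: persisted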


I removed queue.type: persisted from pipelines.yml and now it seems to work! Thank you so much, this gives us a workaround for now at least!

Is there any risk of losing data by not using queue.type: persisted? I can't understand why it used to work and then these errors started appearing.

I don't know the reason, but I had the same problem in the past. After cleaning up the persisted queue directory I was able to use it again, as Logstash will create a new directory and all the proper subdirectories underneath it on startup.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.