Logstash + OpenShift - issues with the jdbc plugin when running as a non-root user

Hi, I am having some trouble figuring out what is happening here.
I am working on pushing some MS SQL data into Elasticsearch.

Here is my jdbc input configuration in Logstash:

input {
    jdbc {
        jdbc_connection_string => "jdbc:sqlserver://${SQL_SERVER};database=${SQL_DATABASE};user=${SQL_USER};password=${SQL_PASSWORD}"
        jdbc_driver_class => "Java::com.microsoft.sqlserver.jdbc.SQLServerDriver"
        jdbc_driver_library => "/opt/logstash/logstash-core/lib/jars/sqljdbc41.jar"
        jdbc_user => "${SQL_USER}"
        jdbc_password => "${SQL_PASSWORD}"
        schedule => "* * * * *"
        statement => "SELECT * from ${SQL_TABLE}"
    }
}

Here is my Dockerfile:

FROM docker.elastic.co/logstash/logstash:5.6.9

# Make the Logstash home writable for a non-root user
RUN chown -R logstash:logstash /usr/share/logstash
RUN chmod -R 777 /usr/share/logstash

# Replace the default pipeline and config, and add the MS SQL JDBC driver
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
ADD logstash.conf /usr/share/logstash/pipeline/
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD sqljdbc41.jar /opt/logstash/logstash-core/lib/jars/

I am passing a number of settings in via environment variables in OpenShift, and it works fine in our QA environment.
But in Production it does not.

The only difference is that in Production containers are not allowed to run as root.

At first I was getting an error that /usr/share/logstash/data was not a writable directory, which is why I added the chown and chmod above, just to be safe.

And I was able to get past that.
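
Side note: since OpenShift runs containers under an arbitrary non-root UID that belongs to the root group, my understanding is that the more targeted fix is to grant group permissions rather than chmod 777. A sketch of that variant (not what I actually ran):

RUN chgrp -R 0 /usr/share/logstash && \
    chmod -R g=u /usr/share/logstash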

But now I am getting this:


{ 211516 rufus-scheduler intercepted an error:
  211516   job:
  211516     Rufus::Scheduler::CronJob "* * * * *" {}
  211516   error:
  211516     211516
  211516     Errno::EACCES
  211516     Permission denied - /.logstash_jdbc_last_run
  211516       org/jruby/RubyFile.java:370:in `initialize'
  211516       org/jruby/RubyIO.java:871:in `new'
  211516       org/jruby/RubyIO.java:4058:in `write'
  211516       /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:110:in `write'
  211516       /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:48:in `write'
  211516       /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb:273:in `execute_query'
  211516       /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb:245:in `run'
  211516       org/jruby/RubyProc.java:281:in `call'
  211516       /usr/share/logstash/vendor/bundle/jruby/1.9/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:234:in `do_call'
  211516       /usr/share/logstash/vendor/bundle/jruby/1.9/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:258:in `do_trigger'
  211516       /usr/share/logstash/vendor/bundle/jruby/1.9/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:300:in `start_work_thread'
  211516       org/jruby/RubyProc.java:281:in `call'
  211516       /usr/share/logstash/vendor/bundle/jruby/1.9/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:299:in `start_work_thread'
  211516       org/jruby/RubyKernel.java:1479:in `loop'
  211516       /usr/share/logstash/vendor/bundle/jruby/1.9/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:289:in `start_work_thread'
  211516   tz:
  211516     ENV['TZ']: 
  211516     Time.now: 2018-08-15 20:06:00 UTC
  211516   scheduler:
  211516     object_id: 209244
  211516     opts:
  211516       {:max_work_threads=>1}
  211516       frequency: 0.3
  211516       scheduler_lock: #<Rufus::Scheduler::NullLock:0x58ccfc41>
  211516       trigger_lock: #<Rufus::Scheduler::NullLock:0x4ae79c92>
  211516     uptime: 135.628 (2m15s627)
  211516     down?: false
  211516     threads: 2
  211516       thread: #<Thread:0x6f79d5cf>
  211516       thread_key: rufus_scheduler_209244
  211516       work_threads: 1
  211516         active: 1
  211516         vacant: 0
  211516         max_work_threads: 1
  211516       mutexes: {}
  211516     jobs: 1
  211516       at_jobs: 0
  211516       in_jobs: 0
  211516       every_jobs: 0
  211516       interval_jobs: 0
  211516       cron_jobs: 1
  211516     running_jobs: 1
  211516     work_queue: 0
}

The funny thing is that it still pushes data to Elasticsearch; this error just shows up every minute (which matches my schedule cadence).

I am just trying to figure out what is wrong so I don't run into something unexpected later in the project.

If you're running Logstash as a non-root user and the .logstash_jdbc_last_run file ends up in the root of the file system, the EACCES error is expected. The default location is $HOME/.logstash_jdbc_last_run, so in your case I'd guess the HOME environment variable is unset. Can you try setting it? Alternatively, set the last_run_metadata_path option of the jdbc input.
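
For example, something like this (a sketch; the exact path is just an illustration, any directory the Logstash process can write to will do):

input {
    jdbc {
        # ... connection settings as above ...
        schedule => "* * * * *"
        statement => "SELECT * from ${SQL_TABLE}"
        # Keep the sql_last_value tracking file somewhere writable
        # instead of the default $HOME/.logstash_jdbc_last_run
        last_run_metadata_path => "/usr/share/logstash/data/.logstash_jdbc_last_run"
    }
}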


Thank you so much!
Doing either of those two things solved the issue!
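
In case anyone finds this later: setting HOME ended up being a one-line Dockerfile change. A sketch (any directory the Logstash user can write to should work):

# Give the non-root user a writable HOME so .logstash_jdbc_last_run can be created
ENV HOME=/usr/share/logstash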

P.S. Also, super quick response! - thanks!
