Logstash crashing when run via systemctl vs. from the command line

I have a Logstash process that deletes documents from an Elasticsearch index when they are updated in the underlying database. It runs fine when I launch it from the command line like this:
sudo /usr/share/logstash/bin/logstash -f qa/pipeline-sql-qa-delete-product.cfg --path.data /tmp/qa/ --config.debug --log.level=debug

It doesn't work when I run the same Logstash command as a systemctl service; the Logstash process dies with the errors below.
sudo /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/qa --path.data /tmp/qa/ --path.settings /etc/logstash/qa_settings_dir --path.logs /var/log/logstash/qa --config.debug --log.level=debug &

Information from logs:

[2018-04-16T22:15:00,772][ERROR][logstash.pipeline ] Error registering plugin {:plugin=>"<LogStash::Inputs::Jdbc jdbc_driver_library=>"sqljdbc4.jar", jdbc_driver_class=>"com.microsoft.sqlserver.jdbc.SQLServerDriver", jdbc_connection_string=>"jdbc:sqlserver://10.147.4.35:1433;databaseName=www;", jdbc_user=>"AppFamily_CMS", jdbc_password=>, schedule=>"* * * * ", statement=>"SELECT p.product_id, p.lastUpdate as lastupdate FROM product p WHERE pshortdesc = ''\n GROUP BY p.product_id, p.lastUpdate\n Having p.lastUpdate > :sql_last_value", type=>"wwwproducts", use_column_value=>true, tracking_column=>"lastupdate", tracking_column_type=>"timestamp", last_run_metadata_path=>"qa_wwwproducts_delete_last_run", lowercase_column_names=>true, record_last_run=>true, id=>"b86745c531739c682cff837d08d0ddbe4a5b32a8-1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_12a91125-fd14-48ed-8e5a-e2c6548405b7", enable_metric=>true, charset=>"UTF-8">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, clean_run=>false>", :error=>"uninitialized constant ThreadSafe::JRubyCacheBackend"}

[2018-04-16T22:15:06,993][ERROR][logstash.agent ] Pipeline aborted due to error {:exception=>#<NameError: uninitialized constant ThreadSafe::JRubyCacheBackend>, :backtrace=>["org/jruby/RubyModule.java:2746:in const_missing'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/thread_safe-0.3.6-java/lib/thread_safe/cache.rb:12:in ThreadSafe'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/thread_safe-0.3.6-java/lib/thread_safe/cache.rb:3:in (root)'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/tzinfo-1.2.4/lib/tzinfo/timezone.rb:1:in (root)'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/tzinfo-1.2.4/lib/tzinfo/timezone.rb:649:in init_loaded_zones'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/tzinfo-1.2.4/lib/tzinfo/timezone.rb:651:in Timezone'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/tzinfo-1.2.4/lib/tzinfo/timezone.rb:46:in TZInfo'", "org/jruby/RubyKernel.java:1040:in require'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyglot.rb:65:in require'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/tzinfo-1.2.4/lib/tzinfo/timezone.rb:5:in (root)'", "org/jruby/RubyKernel.java:1040:in require'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyglot.rb:65:in require'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/tzinfo-1.2.4/lib/tzinfo.rb:1:in (root)'", "org/jruby/RubyKernel.java:1040:in require'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyglot.rb:65:in require'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/tzinfo-1.2.4/lib/tzinfo.rb:28:in (root)'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler.rb:1:in (root)'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler.rb:28:in (root)'", "org/jruby/RubyArray.java:1613:in each'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.3.1/lib/logstash/inputs/jdbc.rb:1:in (root)'", 
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.3.1/lib/logstash/inputs/jdbc.rb:206:in register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:281:in register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292:in register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292:in register_plugins'"]}

Is there a reason why this fails even though both are the same command? One works fine when run from the command line, while the other, run as a service, errors out.

Adding the output part of the configuration file below:

output {
  if [type] == "wwwproducts" {
    elasticsearch {
      hosts => [ "https://search-pcms-es-dev-jrnes5bars2m6famzbk446d67m.us-west-2.es.amazonaws.com:443" ]
      index => "allproductsdev"
      document_id => "%{product_id}"
      action => "delete"
    }
  }
}

The stack trace on the 'Pipeline aborted due to error' suggests the rufus scheduler has an issue parsing the schedule/timezone. Can you post or confirm exactly what the schedule option looks like on the input? Should it be five *'s rather than four?
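For reference, a standard cron expression in the jdbc input's schedule option has five fields (minute, hour, day of month, month, day of week), and the log above shows only four (`schedule=>"* * * * "`). A minimal sketch of the corrected input, with the other jdbc options elided as in your config:

```
input {
  jdbc {
    # ... jdbc_driver_library, jdbc_connection_string, statement, etc. as before ...
    # five cron fields: minute hour day-of-month month day-of-week
    schedule => "* * * * *"   # run every minute
  }
}
```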

Thanks for pointing it out, that seems to be the issue!
