Rufus-scheduler error: TZInfo::PeriodNotFound

Hi all,

I recently posted a question about my Logstash pipeline, which at some point (seemingly) randomly stops updating the JDBC metadata used for sql_last_value substitution. For completeness, I'll include the input plugin of my configuration here. Since the output to ES is not the problem, I won't include the output or filter sections, but they are available via the link above.

input {
        jdbc {
                jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
                jdbc_driver_library => "/usr/local/bin/ojdbc8.jar"
                jdbc_connection_string => "jdbc:oracle:thin:@<connectionstring>"
                jdbc_default_timezone => "Europe/Berlin"
                jdbc_user => "user"
                jdbc_password => "pass"
                statement_filepath => "/path/to/update.sql"
                schedule => "* * * * *"
                last_run_metadata_path => "/path/to/.logstash_jdbc_update"
        }
}

I've since dug into the logs and found something of use: the following error. Note the timestamps:

Jul 15 02:24:11 <user> logstash[840964]: { 11402 rufus-scheduler intercepted an error:
Jul 15 02:24:11 <user> logstash[840964]:   11402   job:
Jul 15 02:24:11 <user> logstash[840964]:   11402     Rufus::Scheduler::CronJob "* * * * *" {}
Jul 15 02:24:11 <user> logstash[840964]:   11402   error:
Jul 15 02:24:11 <user> logstash[840964]:   11402     11402
Jul 15 02:24:11 <user> logstash[840964]:   11402     Sequel::InvalidValue
Jul 15 02:24:11 <user> logstash[840964]:   11402     TZInfo::PeriodNotFound: 2022-03-27 02:27:00 is an invalid local time.
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/tzinfo-2.0.4/lib/tzinfo/timezone.rb:500:in `period_for_local'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/extensions/named_timezones.rb:115:in `convert_input_datetime_other'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/timezones.rb:136:in `convert_input_datetime_no_offset'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/timezones.rb:185:in `convert_input_timestamp'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/timezones.rb:94:in `convert_timestamp'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/database/misc.rb:306:in `to_application_timestamp'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/adapters/jdbc.rb:715:in `timestamp_convert'
Jul 15 02:24:11 <user> logstash[840964]:   11402       org/jruby/RubyMethod.java:123:in `call'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/adapters/jdbc.rb:825:in `process_result_set'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/adapters/jdbc.rb:755:in `block in fetch_rows'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/adapters/jdbc.rb:268:in `block in execute'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/adapters/jdbc.rb:704:in `statement'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/adapters/jdbc.rb:263:in `block in execute'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/connection_pool/threaded.rb:92:in `hold'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/database/connecting.rb:269:in `synchronize'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/adapters/jdbc.rb:262:in `execute'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/dataset/actions.rb:1093:in `execute'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/adapters/jdbc.rb:755:in `fetch_rows'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sequel-5.44.0/lib/sequel/dataset/actions.rb:152:in `each'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-integration-jdbc-5.0.7/lib/logstash/plugin_mixins/jdbc/statement_handler.rb:41:in `perform_query'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-integration-jdbc-5.0.7/lib/logstash/plugin_mixins/jdbc/jdbc.rb:217:in `execute_statement'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-integration-jdbc-5.0.7/lib/logstash/inputs/jdbc.rb:316:in `execute_query'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-integration-jdbc-5.0.7/lib/logstash/inputs/jdbc.rb:279:in `block in run'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:234:in `do_call'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:258:in `do_trigger'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:300:in `block in start_work_thread'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:299:in `block in start_work_thread'
Jul 15 02:24:11 <user> logstash[840964]:   11402       org/jruby/RubyKernel.java:1442:in `loop'
Jul 15 02:24:11 <user> logstash[840964]:   11402       /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:289:in `block in start_work_thread'
Jul 15 02:24:11 <user> logstash[840964]:   11402   tz:
Jul 15 02:24:11 <user> logstash[840964]:   11402     ENV['TZ']:
Jul 15 02:24:11 <user> logstash[840964]:   11402     Time.now: 2021-07-15 00:24:11 +0000
Jul 15 02:24:11 <user> logstash[840964]:   11402   scheduler:
Jul 15 02:24:11 <user> logstash[840964]:   11402     object_id: 2058
Jul 15 02:24:11 <user> logstash[840964]:   11402     opts:
Jul 15 02:24:11 <user> logstash[840964]:   11402       {:max_work_threads=>1}
Jul 15 02:24:11 <user> logstash[840964]:   11402       frequency: 0.3
Jul 15 02:24:11 <user> logstash[840964]:   11402       scheduler_lock: #<Rufus::Scheduler::NullLock:0x4924933b>
Jul 15 02:24:11 <user> logstash[840964]:   11402       trigger_lock: #<Rufus::Scheduler::NullLock:0x617e4ace>
Jul 15 02:24:11 <user> logstash[840964]:   11402     uptime: 140203.227072 (1d14h56m43s227)
Jul 15 02:24:11 <user> logstash[840964]:   11402     down?: false
Jul 15 02:24:11 <user> logstash[840964]:   11402     threads: 2
Jul 15 02:24:11 <user> logstash[840964]:   11402       thread: #<Thread:0x34182421>
Jul 15 02:24:11 <user> logstash[840964]:   11402       thread_key: rufus_scheduler_2058
Jul 15 02:24:11 <user> logstash[840964]:   11402       work_threads: 1
Jul 15 02:24:11 <user> logstash[840964]:   11402         active: 1
Jul 15 02:24:11 <user> logstash[840964]:   11402         vacant: 0
Jul 15 02:24:11 <user> logstash[840964]:   11402         max_work_threads: 1
Jul 15 02:24:11 <user> logstash[840964]:   11402       mutexes: {}
Jul 15 02:24:11 <user> logstash[840964]:   11402     jobs: 1
Jul 15 02:24:11 <user> logstash[840964]:   11402       at_jobs: 0
Jul 15 02:24:11 <user> logstash[840964]:   11402       in_jobs: 0
Jul 15 02:24:11 <user> logstash[840964]:   11402       every_jobs: 0
Jul 15 02:24:11 <user> logstash[840964]:   11402       interval_jobs: 0
Jul 15 02:24:11 <user> logstash[840964]:   11402       cron_jobs: 1
Jul 15 02:24:11 <user> logstash[840964]:   11402     running_jobs: 1
Jul 15 02:24:11 <user> logstash[840964]:   11402     work_queue: 0
Jul 15 02:24:11 <user> logstash[840964]: } 11402 .

When I look at the record of generated JDBC timestamps that Logstash creates, I see:

[ Date / Time Created] --- !ruby/object:DateTime 'GENERATED TIMESTAMP'
2021-07-15 02:23:01 --- !ruby/object:DateTime '2021-07-15 00:21:54.000000000 Z'
2021-07-15 02:24:01 --- !ruby/object:DateTime '2021-07-15 00:22:56.000000000 Z'
2021-07-15 02:25:01 --- !ruby/object:DateTime '2021-07-15 00:22:56.000000000 Z'
2021-07-15 02:26:01 --- !ruby/object:DateTime '2021-07-15 00:22:56.000000000 Z'
... 
2021-07-15 16:32:01 --- !ruby/object:DateTime '2021-07-15 00:22:56.000000000 Z'

Here the --- !ruby/object:DateTime 'GENERATED TIMESTAMP' part is the content of "/path/to/.logstash_jdbc_update".

So in the same pipeline cycle in which the scheduler throws this error, the pipeline stops advancing sql_last_value. Coincidence? I think not. However, this part of the rufus-scheduler error really throws me off:

TZInfo::PeriodNotFound: 2022-03-27 02:27:00 is an invalid local time.

Where is the scheduler getting this timestamp from? Does anyone have experience with this?

Any help / direction is greatly appreciated!

I cannot say why the scheduler is generating that time, but the reason it is invalid is probably that your server is in a location (e.g. Albania, or Denmark) where daylight saving time starts at 02:00 on 2022/3/27, so 01:59:59 is immediately followed by 03:00:00. 02:27:00 does not happen.
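This is easy to see with a few lines of Python (illustrative only, not part of Logstash: it uses the stdlib zoneinfo module, whereas Logstash goes through Ruby's TZInfo, which raises PeriodNotFound for exactly this kind of gap time — Python instead silently shifts the time, which makes the gap visible):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")

# The naive wall-clock time reported in the error, attached to the Berlin zone.
wall = datetime(2022, 3, 27, 2, 27, tzinfo=berlin)

# Round-trip through UTC: a valid local time survives unchanged; a time inside
# the spring-forward gap gets pushed past the gap instead.
round_trip = wall.astimezone(timezone.utc).astimezone(berlin)

print(wall.replace(tzinfo=None))        # 2022-03-27 02:27:00
print(round_trip.replace(tzinfo=None))  # 2022-03-27 03:27:00 -- the wall time did not exist
```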

Strangely, as the recorded JDBC timestamps show, timestamps are still being written at least 21 minutes past 2 a.m. CEST. I could understand Logstash refusing the timestamp write if it did so for all times past 2 a.m., but if it is already writing at 2:21 a.m., why stop suddenly at 2:27 because of an issue that only becomes a problem on the 27th of March, 2022?

I assume the jdbc plugin can be used on a per-minute schedule, so where does the problem lie in my implementation? Do I perhaps need to incorporate the timezone into the schedule, as in the documented example 0 6 * * * America/Chicago?
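If so, I assume the schedule line in my config would look something like this, following the cited example (untested on my side):

```
schedule => "* * * * * Europe/Berlin"
```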

I have in the meantime found the reason for the problem, albeit an unexpected one that took a bit of digging into our database data.

The pipeline was attempting to rewrite all timestamps from the database into the timezone CET. As @Badger indicated, this is a problem because any timestamp between 2 and 3 in the morning on the 27th of March will not exist, as the clock rolls forward. So, how did we deal with this?

After some testing, it seemed the conflict lay in how our machines were storing and retrieving timestamps from the database. So we removed the jdbc timezone setting from our configuration, which makes the plugin default to treating timestamps as UTC. Looking into our database properties, we then noticed that our DB is set to UTC, so we should never have been using this option in the first place: according to the jdbc plugin docs, "If your database timezone is UTC then you do not need to set either of these settings."
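To convince ourselves that defaulting to UTC sidesteps the gap entirely, here is a small Python sketch (illustrative only; it uses the stdlib zoneinfo module rather than the Ruby TZInfo library Logstash uses) that counts how many wall-clock minutes on the transition day fail to round-trip through UTC:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def gap_minutes(tz):
    """Count wall-clock minutes on 2022-03-27 that do not exist in tz,
    detected by checking whether the wall time survives a UTC round-trip."""
    gaps = 0
    wall = datetime(2022, 3, 27)
    for _ in range(24 * 60):
        aware = wall.replace(tzinfo=tz)
        if aware.astimezone(timezone.utc).astimezone(tz).replace(tzinfo=None) != wall:
            gaps += 1
        wall += timedelta(minutes=1)
    return gaps

print(gap_minutes(ZoneInfo("Europe/Berlin")))  # 60 -- the skipped 02:00-02:59 hour
print(gap_minutes(timezone.utc))               # 0  -- no DST, every local time exists
```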

The remaining step was to find another strategy for saving the metadata timestamp so that it, too, was stored in UTC. After struggling to get the query-run timestamp saved as UTC and interpreted as UTC in the next scheduled run, we ultimately decided to track the modification-timestamp column of our JDBC queries instead. In other words, the metadata timestamp is now taken from the DB column itself, rather than from the timestamp Logstash records in local time.

input {
        jdbc {
                jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
                jdbc_driver_library => "/usr/local/bin/ojdbc8.jar"
                jdbc_connection_string => "jdbc:oracle:thin:@<DB ADDRESS>"
                jdbc_user => "<user>"
                jdbc_password => "<pass>"
                statement_filepath => "<path>/<to>/<SQL file>"
                schedule => "* * * * *"
                last_run_metadata_path => "<path>/<to>/<jdbc_metadata_file>"
                enable_metric => false
                tracking_column => "date_changed"
                use_column_value => true
                tracking_column_type => "timestamp"
        }
}
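For completeness, the statement file then filters and orders on the same tracking column, roughly like the sketch below (table and column names are placeholders for our actual schema; :sql_last_value is substituted by the plugin from the metadata file):

```sql
SELECT *
  FROM source_table                      -- placeholder for our actual table
 WHERE date_changed > :sql_last_value    -- only rows modified since the last run
 ORDER BY date_changed                   -- so the newest value is recorded last
```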

To anyone struggling with timestamp issues: I would suggest finding a way to bring all your timestamps into a single timezone, especially if you are using these metadata timestamps in your SQL WHERE clause. UTC is naturally the timezone of choice, as you will not have to worry about DST problems caused by timestamps that do not exist.

I hope this helps! If confusion arises from this description, please let me know so I can make this more readable. These TZ issues are extremely frustrating.

Good luck!
