Logstash JDBC input plugin: Stopped execution without any error log

Hi Team,

I am running Logstash 7.17 as a Kubernetes Deployment (replicas: 2). This Logstash has only a single pipeline, which reads from an Oracle database via the JDBC input plugin and indexes the records into Elasticsearch 7.17 (running on ECK).
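For reference, the pipeline is shaped roughly like the sketch below. The connection details, driver paths, statement, schedule, and index name are placeholders for illustration, not our actual values:

```
input {
  jdbc {
    # Placeholder Oracle connection details.
    jdbc_connection_string => "jdbc:oracle:thin:@//oracle-host:1521/SERVICE"
    jdbc_user => "${ORACLE_USER}"
    jdbc_password => "${ORACLE_PASSWORD}"
    jdbc_driver_library => "/usr/share/logstash/drivers/ojdbc8.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"

    # Incremental query driven by a tracking column (illustrative statement).
    statement => "SELECT * FROM table WHERE mod_datt_bu > sysdate-1 AND mod_datt_bu > :sql_last_value ORDER BY cre_datt_bu"
    use_column_value => true
    tracking_column => "mod_datt_bu"
    tracking_column_type => "timestamp"
    last_run_metadata_path => "/usr/share/logstash/data/.jdbc_last_run"

    # Paging settings (illustrative); with paging enabled the plugin runs a
    # count query before fetching pages, which we believe explains the
    # count(*) wrapper seen in the logs below.
    jdbc_paging_enabled => true
    jdbc_page_size => 10000

    # Illustrative schedule: run once per minute.
    schedule => "* * * * *"
  }
}

output {
  elasticsearch {
    hosts => ["https://elasticsearch-es-http:9200"]
    index => "oracle-sync"
    user => "${ES_USER}"
    password => "${ES_PASSWORD}"
  }
}
```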

When Logstash is running fine, we see queries like the ones below, where it first takes a count on the table and then reads the data. However, we are running into an issue where Logstash simply stops executing the query, and there are no further logs after execution stops.

Pod 1:

[INFO ] 2023-04-17 00:26:27.932 [Ruby-0-Thread-17@[285ae935f84f361ab655fb71083da72b277e4d47ae6351f59600eec215f743f8]<jdbc__scheduler_worker-00: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:284] jdbc - (6.669865s) SELECT count(*) "COUNT" FROM (SELECT * FROM table WHERE mod_datt_bu > sysdate-1 and mod_datt_bu > TIMESTAMP '2023-04-17 00:25:30.000000 +00:00' ORDER BY cre_datt_bu) "T1" FETCH NEXT 1 ROWS ONLY

Pod 2:

[INFO ] 2023-04-17 00:26:29.619 [Ruby-0-Thread-17@[285ae935f84f361ab655fb71083da72b277e4d47ae6351f59600eec215f743f8]<jdbc__scheduler_worker-00: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:284] jdbc - (1.538829s) SELECT count(*) "COUNT" FROM (SELECT * FROM table WHERE mod_datt_bu > sysdate-1 and mod_datt_bu > TIMESTAMP '2023-04-17 00:25:30.000000 +00:00' ORDER BY cre_datt_bu) "T1" FETCH NEXT 1 ROWS ONLY

Key points:

  1. There were 2 pods running. Both Logstash instances stopped executing queries around the same time (see the logs above).
  2. The pods were still in the Running phase.
  3. There was no observed stress on the Logstash pods (in terms of CPU utilization or memory consumption).
  4. No database anomaly was observed (in terms of active sessions, CPU utilization, etc.).
  5. No database connectivity issue was observed, as other application pods (interacting with the same database) were working fine.

Requesting further insight into the issue. Also, how can we make Logstash more robust in such scenarios? (See the sketch below for the kind of hardening we have in mind.)
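For example, we are wondering whether the connection validation and retry options of the jdbc input would help here. The values below are illustrative, not what we currently run:

```
input {
  jdbc {
    # ... connection, statement, and schedule settings as in the sketch above ...

    # Validate the connection before running the statement (illustrative values).
    jdbc_validate_connection => true
    jdbc_validation_timeout => 300

    # Retry establishing the connection a few times before giving up.
    connection_retry_attempts => 3
    connection_retry_attempts_wait_time => 10
  }
}
```

Whether these settings help when the scheduler thread itself stalls, rather than when the connection fails outright, is part of what we would like to understand.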

FYI: I started a 3rd pod, which worked fine without any issues. This indicates that upon process restart/re-initialization, there are no issues.

Thanks,
Aditya

Hi Team,

Currently, Logstash is working as expected. However, any further insight and support on this topic would be really appreciated.

We're trying to sync a large dataset at a high frequency, so it is important that there are no sync issues.

Regards,
Aditya
