Hi,
we have Logstash 6.2.2 running 8 pipelines on a Linux machine; one of them uses scheduled JDBC inputs, and everything runs fine. We upgraded Logstash to version 6.6.0 and now we are having problems: one of the scheduled JDBC inputs stops running (at least it seems to stop, because we no longer see the query being executed in the log and Elasticsearch is not updated).
The pipeline configuration is:
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/gerencial/lib/jdbc/openedge.jar"
    jdbc_driver_class => "com.ddtek.jdbc.openedge.OpenEdgeDriver"
    jdbc_connection_string => "jdbc:datadirect:openedge://10.7.0.7:DatabaseName=dbname1;initializationString=SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;"
    jdbc_user => "USER"
    jdbc_password => "passwd"
    schedule => "* * * * *"
    statement_filepath => "/usr/share/logstash/sql/case1.sql"
    last_run_metadata_path => "/usr/share/logstash/.case1_logstash_jdbc"
    use_column_value => true
    tracking_column => "track_column"
  }
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/gerencial/lib/jdbc/openedge.jar"
    jdbc_driver_class => "com.ddtek.jdbc.openedge.OpenEdgeDriver"
    jdbc_connection_string => "jdbc:datadirect:openedge://10.7.0.7:DatabaseName=dbname2;initializationString=SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;"
    jdbc_user => "USER"
    jdbc_password => "passwd"
    schedule => "* * * * *"
    statement_filepath => "/usr/share/logstash/sql/case2.sql"
    last_run_metadata_path => "/usr/share/logstash/.case2_logstash_jdbc"
    use_column_value => true
    tracking_column => "track_column"
  }
}

filter {
  ...
}

output {
  elasticsearch {
    hosts => ["http://10.1.0.10:9200"]
    user => "logstash"
    password => "passwd"
    action => 'update'
    document_id => '%{id}'
    doc_as_upsert => 'true'
    index => "calculo_%{+YYYY.MM}"
  }
}
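For context, both statement files follow the usual incremental pattern driven by tracking_column and :sql_last_value. A simplified sketch of what case1.sql looks like (the real query is longer; the table and column names below are placeholders, not the actual query):

-- Placeholder sketch of /usr/share/logstash/sql/case1.sql
-- :sql_last_value is substituted by Logstash from the last_run_metadata_path file
SELECT id, track_column, other_fields
FROM some_table
WHERE track_column > :sql_last_value
ORDER BY track_column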
As can be seen above, we have two scheduled JDBC inputs in this pipeline, case1 and case2. case1, whose query returns more rows than case2's, stops running a few times a day, and we have to restart Logstash manually.
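As a possible workaround, we are considering splitting the two inputs into separate pipelines so that one input stalling cannot affect the other; something like this in pipelines.yml (the pipeline IDs and config paths below are just examples, not our current setup):

# hypothetical pipelines.yml split; each .conf would hold one of the jdbc inputs above
- pipeline.id: gerencial_case1
  path.config: "/usr/share/logstash/pipeline/case1.conf"
- pipeline.id: gerencial_case2
  path.config: "/usr/share/logstash/pipeline/case2.conf"

We would prefer to understand the root cause first, though, before restructuring the pipelines.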
Please let me know what could be the issue. Thanks!