I have connected to a database via the logstash jdbc_static filter and the jdbc input.
I have loaded the tables and defined the local_db_objects.
I have created a query in local_lookups, and when I simply join the tables together everything is fine and data is flowing. But when I change it to a left join, logstash stalls and nothing happens for minutes.
local_lookups => [
  {
    id => "rawlogfile"
    query => "
      SELECT l_AE.datetime_, l_AP.role_, l_AP.reference_, l_AE.type_,
             l_Org.name_, l_AP.code_, l_AE.myUnqualifiedId
      FROM l_AE
      JOIN l_AP
        ON  l_AP.myElementSpecificId_AuditEventFHIR   = l_AE.myUnqualifiedId
        AND l_AP.myUnqualifiedVersionId_AuditEventFHIR = l_AE.unqualifiedversionid__
      JOIN l_Org
        ON l_AP.reference_ = l_Org.myUnqualifiedId
    "
    target => "sql_output"
  }
]
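For reference, the left-join variant that stalls is the same lookup with only the join keywords changed (a sketch of what I mean — same tables and columns as above; the jdbc_static local database is Apache Derby, which accepts LEFT OUTER JOIN syntax):

```
local_lookups => [
  {
    id => "rawlogfile"
    query => "
      SELECT l_AE.datetime_, l_AP.role_, l_AP.reference_, l_AE.type_,
             l_Org.name_, l_AP.code_, l_AE.myUnqualifiedId
      FROM l_AE
      LEFT OUTER JOIN l_AP
        ON  l_AP.myElementSpecificId_AuditEventFHIR   = l_AE.myUnqualifiedId
        AND l_AP.myUnqualifiedVersionId_AuditEventFHIR = l_AE.unqualifiedversionid__
      LEFT OUTER JOIN l_Org
        ON l_AP.reference_ = l_Org.myUnqualifiedId
    "
    target => "sql_output"
  }
]
```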
In the terminal I see the following:
[2019-08-06T09:39:13,865][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.8.0"}
[2019-08-06T09:39:23,256][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-08-06T09:39:23,287][INFO ][logstash.filters.jdbcstatic] derby.system.home is: C:\Users\N1XERW
[2019-08-06T09:39:27,147][INFO ][logstash.filters.jdbc.readwritedatabase] loader AuditEventFHIR, fetched 186311 records in: 2.391 seconds
[2019-08-06T09:39:29,960][INFO ][logstash.filters.jdbc.readwritedatabase] loader AuditEventFHIR, saved fetched records to import file in: 2.813 seconds
[2019-08-06T09:39:31,381][INFO ][logstash.filters.jdbc.readwritedatabase] loader AuditEventFHIR, imported all fetched records in: 1.421 seconds
[2019-08-06T09:39:40,495][INFO ][logstash.filters.jdbc.readwritedatabase] loader AuditEventFHIR_Participant, fetched 362087 records in: 9.098 seconds
[2019-08-06T09:39:43,074][INFO ][logstash.filters.jdbc.readwritedatabase] loader AuditEventFHIR_Participant, saved fetched records to import file in: 2.579 seconds
[2019-08-06T09:39:47,058][INFO ][logstash.filters.jdbc.readwritedatabase] loader AuditEventFHIR_Participant, imported all fetched records in: 3.984 seconds
[2019-08-06T09:39:47,152][INFO ][logstash.filters.jdbc.readwritedatabase] loader OrganizationFhir, fetched 800 records in: 0.094 seconds
[2019-08-06T09:39:47,183][INFO ][logstash.filters.jdbc.readwritedatabase] loader OrganizationFhir, saved fetched records to import file in: 0.031 seconds
[2019-08-06T09:39:47,230][INFO ][logstash.filters.jdbc.readwritedatabase] loader OrganizationFhir, imported all fetched records in: 0.047 seconds
[2019-08-06T09:39:47,339][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x79511f9e run>"}
[2019-08-06T09:39:47,386][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-08-06T09:39:47,777][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
C:/Logstash/logstash-6.8.0/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/cronline.rb:77: warning: constant ::Fixnum is deprecated
[2019-08-06T09:40:00,456][INFO ][logstash.inputs.jdbc ] (0.002241s) SELECT * FROM AuditEventFHIR WHERE myUnqualifiedId = '0000134b-fc7f-4c3a-b681-8150068d6dbb'
[2019-08-06T09:41:00,394][INFO ][logstash.inputs.jdbc ] (0.000984s) SELECT * FROM AuditEventFHIR WHERE myUnqualifiedId = '0000134b-fc7f-4c3a-b681-8150068d6dbb'
[2019-08-06T09:42:00,098][INFO ][logstash.inputs.jdbc ] (0.000740s) SELECT * FROM AuditEventFHIR WHERE myUnqualifiedId = '0000134b-fc7f-4c3a-b681-8150068d6dbb'
After that I just get a new SELECT * FROM ... line every minute (this is the input jdbc query), but only if I have used a left or a left outer join. With a normal (inner) join logstash works fine, but the data is wrong.