Relational inputs with the Logstash jdbc plugin

Hi all,
I am using the following configuration in Logstash:

input {
  jdbc {
    jdbc_driver_library => "F:\driver\sqljdbc_6.0\enu\jre8\sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://local:3333;databaseName=TotalTXN"
    jdbc_user => "sss"
    jdbc_password => "sss"
    clean_run => true
    statement => "DECLARE @DDate CHAR(10)
SELECT @DDate = MAX(Date) FROM TotalTXN.dbo.TotalTxn_Control
SELECT TotalTXN.dbo.d2md(@DDate) AS gdate"
  }

  jdbc {
    jdbc_driver_library => "F:\driver\sqljdbc_6.0\enu\jre8\sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://local:3333;databaseName=TotalTXN"
    jdbc_user => "sss"
    jdbc_password => "sss"
    clean_run => true
    statement => "
SELECT TOP 2 [BaseDate_Accept_Setel]
FROM dbo.vw_Total
WHERE (BaseDate_Accept_Setel = (SELECT REPLACE(MAX(Date), '/', '') FROM TotalTXN.dbo.TotalTxn_Control))
  AND (BaseDate_Accept_Setel > :sql_last_value)
"
    use_column_value => true
    tracking_column => "BaseDate_Accept_Setel"
  }
}
filter {
  mutate {
    gsub => [ "gdate", "/", "-" ]
  }
}

output {
  elasticsearch { 
    hosts => ["http://192.168.170.153:9200"]
    index => "a2_%{gdate}"
    user => "logstash_internal25"
    password => "x-pack-test-password"
 }
  stdout { codec => rubydebug }
}

When Logstash is started, the data from the first jdbc input is indexed using the value of gdate, but the data from the second jdbc input is indexed under the literal name "a2_%{gdate}" instead of the value of gdate.
It seems that when the second jdbc input runs, its events carry no gdate value.
How can I handle this issue? I want the output of the second jdbc input to be indexed the same way as the first. Could you please advise me on this?

Each input is run independently. I do not think there is any order-of-execution guarantee. You might be able to add gdate to the events from the second jdbc input with either a jdbc_static or jdbc_streaming filter.
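
Untested, but something along these lines might work, reusing the connection settings and the gdate query from your first jdbc input inside a jdbc_streaming filter:

    filter {
      jdbc_streaming {
        jdbc_driver_library => "F:\driver\sqljdbc_6.0\enu\jre8\sqljdbc42.jar"
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        jdbc_connection_string => "jdbc:sqlserver://local:3333;databaseName=TotalTXN"
        jdbc_user => "sss"
        jdbc_password => "sss"
        statement => "DECLARE @DDate CHAR(10)
    SELECT @DDate = MAX(Date) FROM TotalTXN.dbo.TotalTxn_Control
    SELECT TotalTXN.dbo.d2md(@DDate) AS gdate"
        target => "gdate"
      }
    }

That would run the lookup query for each event and store the result under the gdate field, so every event gets the value regardless of which input produced it.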

Many thanks.
I changed my configuration to the following:

input {
  jdbc {
    jdbc_driver_library => "F:\driver\sqljdbc_6.0\enu\jre8\sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://local:3333;databaseName=TotalTXN"
    jdbc_user => "sss"
    jdbc_password => "sss"
    clean_run => true
    statement => "
SELECT TOP 2 [BaseDate_Accept_Setel]
FROM dbo.vw_Total
WHERE (BaseDate_Accept_Setel = (SELECT REPLACE(MAX(Date), '/', '') FROM TotalTXN.dbo.TotalTxn_Control))
  AND (BaseDate_Accept_Setel > :sql_last_value)
"
    use_column_value => true
    tracking_column => "BaseDate_Accept_Setel"
  }
}
filter {
  jdbc_streaming {
    jdbc_driver_library => "F:\driver\sqljdbc_6.0\enu\jre8\sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://local:3333;databaseName=TotalTXN"
    jdbc_user => "sss"
    jdbc_password => "sss"
    statement => "DECLARE @DDate CHAR(10)
SELECT @DDate = MAX(Date) FROM TotalTXN.dbo.TotalTxn_Control
SELECT TotalTXN.dbo.d2md(@DDate) AS gdate"
    target => "gdate"
  }

  mutate {
    gsub => [ "gdate", "/", "-" ]
  }
}

output {
  elasticsearch { 
    hosts => ["http://192.168.170.153:9200"]
    index => "a6_%{gdate}"
    user => "logstash_internal25"
    password => "x-pack-test-password"
 }
  stdout { codec => rubydebug }
}

But it gives me the following error:
[2019-02-24T17:42:55,369][ERROR][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"a6_{gdate=2019/02/23}", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x2342cdee>], :response=>{"index"=>{"_index"=>"a6_{gdate=2019/02/23}", "_type"=>"doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [a6_{gdate=2019/02/23}], must not contain the following characters [ , \", *, \\, <, |, ,, >, /, ?]", "index_uuid"=>"_na_", "index"=>"a6_{gdate=2019/02/23}"}}}}

Actually, it seems the mutate filter is not working.

If a jdbc_streaming filter is configured with

target => "foo"
statement => "select ... as bar"

then foo is an array of result rows, each containing a bar field. So in your case the data you get back is the equivalent of

{ "gdate" : [ { "gdate": "2019/02/23" } ] }

So this should work

    mutate { replace => { "gdate" => "%{[gdate][0][gdate]}" } }
    mutate { gsub => [ "gdate", "/", "-" ] }

It works.
Many, many thanks.

Can I ask another question?

I am using tracking_column to keep a query checkpoint. It is expected to save the last value of "BaseDate_Accept_Setel" in sql_last_value, but in .logstash_jdbc_last_run I just see the value "--- 0", while it should contain a value such as "20190101". How can I handle this issue?

I suggest you start a new thread for that new question.

Thanks for your reply. Actually, I asked it before and am still waiting for advice.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.