Logstash-filter-jdbc_streaming -- parameter issue calling stored procedure (MSSQL)

Logstash Version: 6.4 hosted on Oracle Linux Server release 6.8
Database: MSSQL 2017

I'm running into an issue with the way logstash-filter-jdbc_streaming adds parameters to statements. I am calling a stored procedure in my MSSQL database that takes a parameter, and I keep getting a JDBC error when I try to add that parameter via the "parameters =>" hash.

Error message (INFO log level):
[2019-11-22T09:18:55,447][WARN ][logstash.filters.jdbcstreaming] Exception when executing JDBC query {:exception=>#<Sequel::DatabaseError: Java::ComMicrosoftSqlserverJdbc::SQLServerException: Invalid length parameter passed to the LEFT or SUBSTRING function.>}

Error message (DEBUG log level):
[2019-11-22T09:20:06,086][WARN ][logstash.filters.jdbcstreaming] Exception when executing JDBC query {:exception=>#<Sequel::DatabaseError: Java::ComMicrosoftSqlserverJdbc::SQLServerException: Invalid length parameter passed to the LEFT or SUBSTRING function.>}
[2019-11-22T09:20:06,089][DEBUG][logstash.filters.jdbcstreaming] Executing JDBC query {:statement=>"SP_INCIDENT_BACKLOG_INFERENCE :inc", :parameters=>{:inc=>"INC000003787169"}}

Logstash filter plugin config:

jdbc_streaming {
      id => "JDBCSTREAM"
      jdbc_driver_library => "/etc/logstash/ancillary/jdbc/drivers/sqljdbc_6.0/enu/jre8/sqljdbc42.jar"
      jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
      jdbc_connection_string => ""
      jdbc_user => ""
      jdbc_password => ""
      #----Functions--> statement => "SP_INCIDENT_BACKLOG_INFERENCE INC000003787169"
      #----Functions--> statement => "SP_INCIDENT_BACKLOG_INFERENCE @INCIDENT=INC000003787169" 
      #----Functions--> statement => "Select * from INCIDENT_WRK_T1902 where incident_id = :inc"
      #----Broken-->statement => "SP_INCIDENT_BACKLOG_INFERENCE :inc" 
      #----Broken-->statement => "SP_INCIDENT_BACKLOG_INFERENCE @INCIDENT= :inc"    
      parameters => { "inc" => "incident_id" }
      target => "sql_results"
      #default_hash => { "[sql_results][0][notes]" => "No notes when querying ODS.", "[sql_results][0][resolution]" => "No resolution when querying ODS" }
    }

The issue occurs at this point in the plugin's Ruby code, but I don't know enough about how the statement actually gets constructed and sent to troubleshoot any further.

/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-jdbc_streaming-1.0.4/lib/logstash/filters/jdbc_streaming.rb

def cache_lookup(event)
  params = prepare_parameters_from_event(event)
  @cache.get(params) do
    result = CachePayload.new
    begin
      query = @database[@statement, params] # returns a dataset   <-- Error Source
      @logger.debug? && @logger.debug("Executing JDBC query", :statement => @statement, :parameters => params)
      query.all do |row|
        result.push row.inject({}) { |hash, (k, v)| hash[k.to_s] = v; hash } # Stringify row keys
      end
    rescue ::Sequel::Error => e
      # all sequel errors are a subclass of this, let all other standard or runtime errors bubble up
      result.failed!
      @logger.warn? && @logger.warn("Exception when executing JDBC query", :exception => e)
    end
    # if either of: no records or a Sequel exception occurs the payload is
    # empty and the default can be substituted later.
    result
  end
end
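For context on what `prepare_parameters_from_event` produces before Sequel sees the statement: the plugin walks the `parameters` hash and replaces each event field name with that field's value from the current event, so `{ "inc" => "incident_id" }` becomes `{ :inc => "INC000003787169" }`, which Sequel then binds to the `:inc` placeholder. Here's a minimal sketch of that mapping (the method name and the simplified hash-based event are illustrative, not the plugin's exact code):

```ruby
# Illustrative model of how jdbc_streaming resolves the `parameters`
# hash against an event before handing the result to Sequel.
# The real plugin calls event.get(field) on a Logstash event object.
def prepare_parameters(parameters, event)
  parameters.inject({}) do |acc, (name, field)|
    acc[name.to_sym] = event[field] # look up the event field for each parameter
    acc
  end
end

event  = { "incident_id" => "INC000003787169" }
params = prepare_parameters({ "inc" => "incident_id" }, event)
puts params.inspect
```

Sequel then receives something equivalent to `@database["SP_INCIDENT_BACKLOG_INFERENCE :inc", { inc: "INC000003787169" }]` and substitutes the placeholder itself.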

Any help/insight would be greatly appreciated!

Thanks!

Alright, so... a new day, a fresh set of eyes :slight_smile:

It turns out that this error message is benign. I tried to determine why the exception was being thrown, but I had to move on.

Exception when executing JDBC query {:exception=>#<Sequel::DatabaseError: Java::ComMicrosoftSqlserverJdbc::SQLServerException: Invalid length parameter passed to the LEFT or SUBSTRING function.>}

Yesterday, while I was troubleshooting, our database was having performance issues, so the stored procedure was taking much longer than usual to execute. Had I waited five minutes I could have determined this and moved on to the next issue :frowning:

The Real Problem:
Once the JDBC query executes and the results are returned to Logstash, I receive the following error message and the pipeline stops processing.

Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash. 

     {
         :pipeline_id=>"dev_inference", 
         "exception"=>"Missing Converter handling for full class name=org.jruby.RubyObjectVar3, simple name=RubyObjectVar3", 
         "backtrace"=>[
            "org.logstash.Valuefier.fallbackConvert(Valuefier.java:97)", 
            "org.logstash.Valuefier.convert(Valuefier.java:75)", ....
            <could not post entire stack>
        ], 
        :thread=>"#<Thread:0x508dd494@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:157 sleep>"
    } 

A little googling on

Missing Converter handling for full class name=org.jruby.RubyObjectVar3

Led me to an old discussion topic: https://discuss.elastic.co/t/problem-with-timestamp-and-timezone/148622

Which resulted in an issue being logged for the logstash-input-jdbc plugin:
https://github.com/logstash-plugins/logstash-input-jdbc/issues/302

Which led me to guyboertje's comment pointing back to the logstash-filter-jdbc_streaming plugin: https://github.com/logstash-plugins/logstash-input-jdbc/issues/302#issuecomment-434327517

Given that conversation, this looked like a type issue: one of my MSSQL types was not playing well with whatever that internal JRuby class (org.jruby.RubyObjectVar3) does. By process of elimination I found the culprit: a single date-typed field I was using to drop the time portion, which wasn't needed. All I had to do was cast this date type to our standard datetime2 type and everything was right with the world.
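For anyone hitting the same converter error, the fix happens on the SQL side so the JDBC driver hands Logstash a type it knows how to convert. A sketch of the change inside the stored procedure's SELECT (the column name here is illustrative; use whichever date column trips the error in your result set):

```sql
-- Before: a bare date column surfaced to Logstash as org.jruby.RubyObjectVar3
-- SELECT submit_date FROM INCIDENT_WRK_T1902 WHERE incident_id = @INCIDENT;

-- After: cast date to datetime2 so the driver returns a convertible type
SELECT CAST(submit_date AS datetime2) AS submit_date
FROM INCIDENT_WRK_T1902
WHERE incident_id = @INCIDENT;
```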

I am not sure whether upgrading to a newer version of Logstash would solve this problem, but currently that's not an option for me. I wanted to make sure this issue was documented in case someone else runs into it in the future.

Thanks!