Jdbc_static throwing raise_pool_timeout exception

Hello everyone,
I am using jdbc_static to populate my lookup table and, after I increased my pipeline workers from 8 to 32, I started receiving this error:

Exception when executing Jdbc query {:lookup_id=>"talog-filter_loadingchaincode_v1.0", :exception=>"timeout: 5.0, elapsed: 14.633712253998965", :backtrace=>["C:/LS/logstash-8.17.3/vendor/bundle/jruby/3.1.0/gems/sequel-5.87.0/lib/sequel/connection_pool/threaded.rb:280:in `raise_pool_timeout'", "C:/LS/logstash-8.17.3/vendor/bundle/jruby/3.1.0/gems/sequel-5.87.0/lib/sequel/connection_pool/threaded.rb:153:in `acquire'", "C:/LS/logstash-8.17.3/vendor/bundle/jruby/3.1.0/gems/sequel-5.87.0/lib/sequel/connection_pool/threaded.rb:91:in `hold'", "C:/LS/logstash-8.17.3/vendor/bundle/jruby/3.1.0/gems/sequel-5.87.0/lib/sequel/database/connecting.rb:283:in `synchronize'", "C:/LS/logstash-8.17.3/vendor/bundle/jruby/3.1.0/gems/sequel-5.87.0/lib/sequel/adapters/jdbc.rb:260:in `execute'", "C:/LS/logstash-8.17.3/vendor/bundle/jruby/3.1.0/gems/sequel-5.87.0/lib/sequel/dataset/actions.rb:1197:in `execute'", "C:/LS/logstash-8.17.3/vendor/bundle/jruby/3.1.0/gems/sequel-5.87.0/lib/sequel/adapters/jdbc.rb:757:in `fetch_rows'", "C:/LS/logstash-8.17.3/vendor/bundle/jruby/3.1.0/gems/sequel-5.87.0/lib/sequel/dataset/actions.rb:164:in `each'"]}

I have read that I need to increase the max pool connection size and the pool timeout, but sequel_opts is not recognized as a valid parameter for jdbc_static.

Could you guys help me?

Hello @Francesco_Esposito

Could you please check below if it could help:

For example, raising the pool timeout to 15 seconds and the pool size to 32:


input {
  jdbc {
    jdbc_driver_library => "path/to/your/jdbc/driver.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/your_database"
    jdbc_user => "your_username"
    jdbc_password => "your_password"
    statement => "SELECT * FROM your_table"
    jdbc_pool_timeout => 15
    sequel_opts => { "max_connections" => 32 }
  }
}

Thanks!!

Hey @Tortoise, I am using jdbc_static for data enrichment, not the jdbc input plugin, unfortunately. The jdbc_static filter doesn't support those pool options, so that's not viable.

Are you running the loader on a schedule or does this error just occur at startup? Do you get multiple errors like that? If so, can you show them all?

The loader runs on a schedule. The error refers to the lookup query against the in-memory table (talog-filter_chaincode_loading_v1.0), not the loader query against the SQL Server database, which looks correct.
This is the configuration:

		jdbc_static {
			id => "talog-filter_jdbc_static_v1.0"
			jdbc_driver_library => "C:\LS\connectors\mssql-jdbc-12.10.0.jre11.jar"
			jdbc_connection_string => "connection string"
			jdbc_user => "user"
			jdbc_password => "password"
			jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
			loader_schedule => "*/30 * * * *"
			local_db_objects => [
				{
					name => "localstorechaincode"
					index_columns => ["rnid"]
					columns => [
						["rnid", "varchar(100)"],
						["chaincode", "varchar(100)"]
					]
				}
			]
			loaders => [
				{
					id => "talog-filter_chaincode_dbquery_v1.0"
					query => "select cast(s.systemcode as varchar(100)) RNID, rtrim(s.ChainCode) ChainCode
							    from store s
							   where s.CHAINCODE is not null
							   order by s.SYSTEMCODE"
					local_table => "localstorechaincode"
				}
			]
			local_lookups => [
				{
					id => "talog-filter_chaincode_loading_v1.0"
					query => "select case when chaincode = 'NULL' then '' else chaincode end AS chaincode 
							 from localstorechaincode 
							 where rnid = :RNID"
					parameters => { 
									RNID => "[@metadata][_account]"
								}
					target => "[@metadata][loadattr1]"
					default_hash => {
						chaincode => ""
					}
				}
			]
			add_field => { "[@metadata][wschaincode]" => "%{[@metadata][loadattr1][0][chaincode]}" }
			tag_on_failure => []
			tag_on_default_use => []
		}

We get a bunch of subsequent errors.

After I increased the pipeline workers from 8 to 32, I started receiving the error you can see in the original message.

Well, it's not obvious to me how an indexed lookup on an in-memory table could take 15 seconds. I would start by looking at thread dumps to see what all the worker threads are working on. My understanding is that each worker thread has its own copy of the table, but perhaps they are all waiting on one another for something.
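In case it helps: one way to take that dump is with jstack from the JDK bundled with Logstash (the path below is an assumption based on the install directory in your backtrace). Pipeline worker threads show up in the dump with names like "[main]>worker0", so you can filter for them; the sample lines here just stand in for a real dump to show the filtering:

```shell
# Take the dump against the running Logstash JVM (find the pid with jps), e.g.:
#   C:\LS\logstash-8.17.3\jdk\bin\jstack <pid> > threads.txt
# Sample lines standing in for a real dump, for illustration only:
cat > threads.txt <<'EOF'
"[main]>worker0" #45 daemon prio=5 os_prio=0 waiting on condition
"[main]>worker1" #46 daemon prio=5 os_prio=0 runnable
"Ruby-0-Thread-12" #47 daemon prio=5 os_prio=0 waiting on condition
EOF
# Count the pipeline worker threads; inspect each one's stack in the dump.
grep -c '>worker' threads.txt
```

If the worker stacks all sit in Sequel's connection pool (the `acquire` / `raise_pool_timeout` frames from your backtrace), that would point at contention on the local lookup database rather than the query itself.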