Logstash JDBC Input Plugin Connection Pooling

Our team is actively using the Logstash JDBC input plugin with the MSSQL JDBC driver to read data from a database for further processing.
Logstash version: 8.5.1
MSSQL JDBC version: 11.2.1.jre17

At the moment we are facing some connectivity issues. To understand their root cause, we would like to clarify the following points related to connection pooling.

  1. Does the plugin actually support connection pooling? I can see some configuration settings related to connection pooling, but in our case each pipeline execution appears to create a new connection. Is there a pipeline configuration setting that enables connection pooling? I know that Logstash uses Sequel under the hood to access the database, and Sequel seems to use connection pools by default. So is it correct to say that each pipeline execution creates a new connection pool every time? (A minimal configuration sketch is included at the end of this post.)
  2. If the plugin does support connection pooling, does each pipeline have its own connection pool, or is it shared across all pipelines?

I didn't manage to find answers to these questions in the existing documentation.
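For context, here is a minimal sketch of the kind of pipeline configuration in question. The driver path, connection string, credentials, schedule, and query are placeholders rather than our real values; the pool-related options at the end are the ones I'm asking about.

```
input {
  jdbc {
    # Placeholder connection details -- not our real environment
    jdbc_driver_library => "/usr/share/logstash/drivers/mssql-jdbc-11.2.1.jre17.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://db-host:1433;databaseName=example_db;encrypt=true;trustServerCertificate=true"
    jdbc_user => "logstash_user"
    jdbc_password => "${JDBC_PASSWORD}"
    schedule => "*/5 * * * *"
    statement => "SELECT id, payload, updated_at FROM dbo.example_table WHERE updated_at > :sql_last_value"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"

    # Pool-related options listed in the plugin documentation
    jdbc_pool_timeout => 5
    jdbc_validate_connection => true
    jdbc_validation_timeout => 3600
  }
}

output {
  stdout { codec => rubydebug }  # downstream processing omitted
}
```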

Hi @alromos,

The plugin does support pooling, but it is dependent upon the underlying driver, as per the documentation.

Hi @Wave
Thank you for the answer.
However, I don't see any mention of a relationship between connection pool support and the underlying driver on the page you linked.
I can only see that we may specify the following settings related to connection pooling:

  • jdbc_pool_timeout
  • jdbc_validate_connection
  • jdbc_validation_timeout

Some additional connection pool options may also be passed via the sequel_opts setting (see the sketch below).
It's not mentioned anywhere that connection pool support depends on the vendor.
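To make that concrete, this is roughly how I understand the two mechanisms would be combined. max_connections and pool_timeout are standard Sequel connection pool options; whether forwarding them through sequel_opts is the intended way to tune the pool is exactly what I'm trying to confirm.

```
input {
  jdbc {
    # Same placeholder connection settings as in the sketch in my first post
    jdbc_driver_library => "/usr/share/logstash/drivers/mssql-jdbc-11.2.1.jre17.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://db-host:1433;databaseName=example_db"
    jdbc_user => "logstash_user"
    jdbc_password => "${JDBC_PASSWORD}"
    schedule => "*/5 * * * *"
    statement => "SELECT id, payload FROM dbo.example_table"

    # Pool-related options exposed by the plugin itself
    jdbc_pool_timeout => 5
    jdbc_validate_connection => true
    jdbc_validation_timeout => 3600

    # Extra options forwarded to Sequel; max_connections and pool_timeout
    # are standard Sequel connection pool options
    sequel_opts => {
      "max_connections" => 2
      "pool_timeout" => 10
    }
  }
}

output {
  stdout { codec => rubydebug }  # downstream processing omitted
}
```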

Sure, I probably wasn't very clear. The jdbc input supports connection pooling, but that depends on the underlying driver also supporting it. If you happen to use a driver that doesn't have it, then it won't matter what you specify for those options, if that makes sense.

I doubt that statement is true.
Connection pooling is handled by the Sequel Ruby library, which wraps the JDBC driver. So I believe the only requirement for connection pooling support is that the driver implements the JDBC API; the driver does not have to provide explicit connection pooling support of its own.

So, taking into account your answers and my own observations, I would like to summarize my findings.

  1. Does the plugin actually support connection pooling?
    Yes, the plugin supports connection pooling.
    It looks like I was seeing a new connection per pipeline execution because the plugin version bundled with the Logstash version I use does not contain the fix for this issue:
    Connection leak on pipeline reload when using Oracle driver · Issue #118 · logstash-plugins/logstash-integration-jdbc · GitHub
    After switching to a newer plugin version (5.4.1 or higher), I can see that connections are created properly.
  2. If the plugin supports connection pooling, does each pipeline have its own connection pool, or is it shared across all pipelines?
    Each pipeline has its own connection pool; it is not shared across pipelines. So if a Logstash instance has 30 pipelines with the JDBC input plugin, there will be 30 connection pools, each with a single connection inside (illustrated in the sketch below).
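To illustrate the second point: with two pipelines defined in separate config files (hypothetical file names below), each jdbc input builds its own Sequel connection pool even though both point at the same database, which is why 30 such pipelines end up with 30 pools.

```
# pipeline_one.conf (hypothetical file name) -- this jdbc input gets its own Sequel pool
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/drivers/mssql-jdbc-11.2.1.jre17.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://db-host:1433;databaseName=example_db"
    jdbc_user => "logstash_user"
    jdbc_password => "${JDBC_PASSWORD}"
    schedule => "*/5 * * * *"
    statement => "SELECT id, payload FROM dbo.table_one"
  }
}
output { stdout { codec => rubydebug } }

# pipeline_two.conf (hypothetical file name) -- a separate pipeline, therefore a separate pool,
# even though it connects to the same database
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/drivers/mssql-jdbc-11.2.1.jre17.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://db-host:1433;databaseName=example_db"
    jdbc_user => "logstash_user"
    jdbc_password => "${JDBC_PASSWORD}"
    schedule => "*/10 * * * *"
    statement => "SELECT id, payload FROM dbo.table_two"
  }
}
output { stdout { codec => rubydebug } }
```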