'No operations allowed after connection closed' errors during Filter phase

Hello friends,

We are facing a critical issue with our Logstash indexing pipeline during the filtering phase, which executes some very heavy queries on our application DB.

Our business application performs constant write operations on the DB and continuously triggers a Solr server to index. This goes on for several hours.

The issue occurs when the Logstash pipeline kicks in while that process is active, and its filtering queries try to connect to the DB too.

We immediately see numerous WARN messages containing errors like: "Sequel::DatabaseError: Java::JavaSql::SQLNonTransientConnectionException: No operations allowed after connection closed."

The problem is only resolved by restarting Logstash; otherwise it keeps throwing these errors indefinitely. Once restarted, it seems to operate smoothly.

Can anyone think of a reason for this behaviour, and possibly a solution? We have tried increasing timeouts, introducing paging, and downgrading the JDBC driver, but nothing has helped.

Our present stack is:

  • Logstash 8.5.1 with JDBC driver: mysql-connector-j-8.1.0.jar
  • MySQL/Percona Server (GPL), Release 25, Revision 60c9e2c5

Thank you very kindly.

Which parameters have you set on the jdbc_streaming plugin? Can you provide the full connection string, without user/pass?

Have you tried changing the parameters jdbc_validate_connection and jdbc_validation_timeout?
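
If not, they are worth a try. As a minimal sketch (the host/database and the trivial statement below are placeholders, not your config), they go directly inside the jdbc_streaming block:

        jdbc_streaming {
            jdbc_connection_string => "jdbc:mysql://host:3306/mydatabase"
            jdbc_user => "${JDBC_USER}"
            jdbc_password => "${JDBC_PASSWORD}"
            # Check that the cached connection is still alive before use,
            # re-validating it at most every 60 seconds.
            jdbc_validate_connection => true
            jdbc_validation_timeout => 60
            statement => "SELECT 1"
            target => "result"
        }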

Thanks for your reply, Rios!

I'll paste one of our filter queries here so you can see the parameters:

        jdbc_streaming {
            jdbc_driver_library => "/mysql/mysql-connector-j-8.1.0.jar"
            jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
            jdbc_connection_string => "${JDBC_CONNECTION_STRING}"
            jdbc_user => "${JDBC_USER}"
            jdbc_password => "${JDBC_PASSWORD}"
            statement => "
                SELECT COUNT(dm.id) AS freq, COALESCE(MIN(dm.dtgrant), '') AS legacy_first_use,
                    COALESCE(MAX(dm.dtgrant), '') AS legacy_last_use
                FROM dterm dt
                    JOIN ltermtrconc l ON dt.id = l.idterm
                    JOIN dtrconcept conc ON l.idtrconcept = conc.id
                    JOIN dmark dm ON dm.id = conc.idmark
                WHERE idterm = :term_id
                    AND dm.dtgrant > DATE_SUB(current_date(), INTERVAL 2 YEAR);
            "
            parameters => { "term_id" => "id" }
            target => "freq_data"
        }

Yes, we did try fiddling with the jdbc_validate* parameters, but it didn’t fix the problem. We tested with:

jdbc_validate_connection => true
jdbc_validation_timeout => 60

So far, the only workaround we have found is to stop Logstash until the other process is done with the DB, and then start it again.

It’s puzzling how Logstash is able to overcome these connection errors once restarted. It’s as if something is reinitialised and works well from then on. But it doesn’t make sense, as the pipelines do not keep any state or connections between runs. Maybe Logstash itself maintains something, but we can’t figure out what.
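
One thing we still plan to check (just a hypothesis on our side, not confirmed) is whether the DB server itself is killing Logstash’s connections while the heavy indexing run keeps it busy, e.g. via the server-side idle timeouts. On the MySQL side, something like this should show it:

        -- List the current client connections and how long each has been idle
        SHOW PROCESSLIST;

        -- Server-side idle limits: connections idle for longer than
        -- wait_timeout are closed by the server
        SHOW VARIABLES LIKE 'wait_timeout';
        SHOW VARIABLES LIKE 'interactive_timeout';

If the server does close the connection, the next query on the client side would fail with exactly this kind of "No operations allowed after connection closed" error.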

Thanks again!

There is a suggestion here to add autoReconnect to the connection string:

jdbc_connection_string => "jdbc:mysql://host:3306/mydatabase?autoReconnect=true"
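
If autoReconnect alone is not enough, Connector/J also has a few related properties that may be worth combining with it (host/database are placeholders, and the values are only a starting point):

jdbc_connection_string => "jdbc:mysql://host:3306/mydatabase?autoReconnect=true&maxReconnects=10&socketTimeout=60000&tcpKeepAlive=true"

maxReconnects caps the number of reconnection attempts, socketTimeout (in milliseconds) aborts reads that hang on a dead connection, and tcpKeepAlive lets the OS detect broken sockets.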

Thank you! Yes, autoReconnect=true has been tried too.