"No current connection" from jdbc_static

Hello, I'm trying to set up a Logstash pipeline that does an enrichment lookup via the jdbc_static filter plugin, calling a MariaDB database via the MySQL JDBC connector. As an aside, I initially tried the MariaDB connector, but that doesn't appear to be supported at this time.
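For context, here's a minimal sketch of the kind of config I'm working with; the driver path, connection string, and all table and column names below are stand-ins rather than my actual values:

filter {
  jdbc_static {
    loaders => [
      {
        id => "servers"
        # hypothetical source table in MariaDB
        query => "SELECT hostname, descr FROM ref.servers"
        local_table => "servers"
      }
    ]
    local_db_objects => [
      {
        name => "servers"
        index_columns => ["hostname"]
        columns => [
          ["hostname", "varchar(64)"],
          ["descr", "varchar(255)"]
        ]
      }
    ]
    local_lookups => [
      {
        query => "SELECT descr FROM servers WHERE hostname = :host"
        parameters => { "host" => "[host][name]" }
        target => "server_description"
      }
    ]
    # MariaDB reached through the MySQL Connector/J driver
    jdbc_driver_library => "/opt/drivers/mysql-connector-java-5.1.46.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://db.example.com:3306/ref"
    jdbc_user => "logstash"
    jdbc_password => "xxxx"
    loader_schedule => "*/30 * * * *"
  }
}

The error below appears during the loader phase, i.e. while the query under loaders runs and its results are being written into the local Derby table.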

With the config as is, it looks like it's initially able to connect (no complaints about the table or database, which I did see when I had minor mistakes in my configuration), but I see the following error when it gets to filling the local Derby database:

LogStash::Filters::Jdbc::LoaderJdbcException: Exception when filling lookup db from loader query results, original exception: Java::JavaSql::SQLNonTransientConnectionException, original message: No current connection

There's no other context given, even with log.level set to trace. The closest comparison to what I'm seeing was this GitHub link:

I'm using Logstash and Filebeat 6.2.4, so that should already be fixed and shouldn't be the issue. Any idea what I might be missing? I've checked MariaDB and it's up and running, and I can connect to it with the same user and password configured in the pipeline.

I get the same error, and at the same time I see this in derby.log:

java.lang.StackOverflowError
    at org.apache.derby.impl.sql.compile.TableOperatorNode.modifyAccessPaths(Unknown Source)
    at org.apache.derby.impl.sql.compile.UnionNode.modifyAccessPaths(Unknown Source)
    at org.apache.derby.impl.sql.compile.ResultSetNode.modifyAccessPaths(Unknown Source)
    at org.apache.derby.impl.sql.compile.TableOperatorNode.modifyAccessPaths(Unknown Source)
    ...

I think the two are related, but I'm not able to say for certain.

I'm still trying to troubleshoot this.

Could it be related to this? Filter_jdbc_static & Apache Derby

It could be, but I've already set the Logstash Java heap to 6 GB, and this happens before I throw any data at it at all; as near as I can tell, it's still loading the table from MariaDB into Derby. I haven't even been able to start Filebeat to test the pipeline, because it never gets that far. It's not related to row count either, because my source table has fewer than 400k rows.

Apparently, the Logstash heap setting doesn't apply to Derby. In my case, I was getting the StackOverflowError at just over 4k rows. Following that thread, I upgraded Logstash to 6.3. Problem solved!

Hope you get your solution soon!

Thanks. I'm not sure how else you'd set the JVM heap size for a process ostensibly started from within Logstash. If you're using a jvm.options file for Logstash (which I am), startup begins by telling you it's ignoring JAVA_OPTS if it's set in the environment. It's possible that Derby is somehow spawned with a fresh environment, though, so I'll try a test and see if it makes any difference. I'm not sure I can solve this via an upgrade; I'll be lucky to get 6.2.4 into my target environment.
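For that test, my plan is to raise the thread stack size alongside the heap in Logstash's jvm.options, since a StackOverflowError is bounded by the JVM thread stack size (-Xss) rather than the heap; if Derby really does run embedded inside the Logstash JVM, these settings should reach it. Roughly this (the sizes are guesses for the test, not tuned values):

# config/jvm.options -- read by Logstash at startup
# heap, already raised as described above
-Xms6g
-Xmx6g
# thread stack size; StackOverflowError depth is limited by this, not by -Xmx
-Xss4m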

Again, many thanks for looking at this with me.
