Need to use bind variables for Oracle query in jdbc-input (using sql_last_value) in Logstash

As per the jdbc-input-plugin documentation, we're using :sql_last_value to load only the data changed since the query was last run:


input {
    jdbc {
        statement => "SELECT * from schema.table WHERE updated_date >= :sql_last_value"
        last_run_metadata_path => "/var/logstash/filename.lastrun"
        use_column_value => false
        tracking_column => "updated_date"
        tracking_column_type => "timestamp"
    }
}

However, each time the query is run, a new execution plan is created by Oracle, rather than using an existing plan.
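To illustrate why this happens: the plugin substitutes :sql_last_value into the SQL text as a literal before submitting it, so every run produces a distinct statement string that Oracle must hard-parse. A simplified Ruby sketch of that substitution (hypothetical code, not the plugin's actual implementation):

```ruby
# Hypothetical illustration: the :sql_last_value placeholder is replaced
# with a literal value, not bound as a parameter, so each run yields a
# different SQL text (and hence a new execution plan in Oracle).
statement = "SELECT * from schema.table WHERE updated_date >= :sql_last_value"

def interpolate(sql, last_value)
  # Substitute the placeholder with a literal timestamp
  sql.gsub(':sql_last_value', "TIMESTAMP '#{last_value}'")
end

first_run  = interpolate(statement, '2018-01-01 10:00:00')
second_run = interpolate(statement, '2018-01-01 10:02:00')
# first_run != second_run: two distinct statement texts, two plans
```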

As this query is scheduled to run every 2 minutes, this means Oracle is wasting valuable resources and we've been told that this behaviour cannot be allowed in Production.

Is there any way that we can use 'bind variables' with the jdbc-input-plugin, to utilise PreparedStatement functionality, reducing the number of execution plans to one?
See Bind variables - The key to application performance

This issue was raised before, but seems to have been closed and I cannot get the solution suggested to work:

There has been an issue open on this for over a year.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.

Even though this is closed, I will follow up for future readers who expect that prepared statements should be the norm.

Sequel, being an ORM, is mostly used to define DB interactions through objects that generate SQL statements, which are then submitted for execution. For example:

DB[:mytable].select(:id, :name, :age).where(Sequel.lit('age > ?', 21)) # SELECT id, name, age FROM mytable WHERE age > 21
DB[:mytable].first(2) # SELECT * FROM mytable LIMIT 2
DB[:mytable].count(:id) # SELECT count(id) AS count FROM mytable LIMIT 1

This is usable if you are writing a web application with a DB schema that the application itself enforces. That is the ORM bit: you map a field on an HTML form to a column in a table, or a value displayed in a web page to a column from a record in a catalogue, etc.

The Sequel docs show how to use prepared statements thus:

ps = DB[:mytable].where(id: :$ident).prepare(:first, :select_by_id)

Now, I don't yet know the implications of the "type" set with :first (maybe we would use :select), and I presume that to use the greater-than literal we would use:

ps = DB[:mytable].where(Sequel.lit('id > ?', :$ident)).prepare(:select, :select_next)

The example given in the Sequel docs uses SQL generated via the ORM; it is not easy to see whether prepare can be called on a raw statement without a detailed reading of the Sequel code. In addition, prepared statements are stored on the server and executed by name repeatedly. If multiple jdbc inputs are specified in the config, a "constant" name in the plugin code would cause overwrites, so a user-defined prepared statement name would need to be specified. (Each plugin can have an id, but this is not often specified; if not, a random one is generated, and there would be a net build-up of unused prepared statements in the DB if the same statement were submitted under a new name on each LS restart.)

In ETL/BI tools it is quite common to allow users to specify a SQL statement to execute. This is useful because it is a kind of WYSIWYG mechanism: using some interface to the DB, one can refine the statement until it yields the correct results, or be given one by a DBA. Ironically, the statements that benefit most from being prepared are the most complex ones.

Lastly, a ray of hope: it may be possible to build the prepared statement from the raw SQL statement as supplied by the user. I have not tried it, but I suspect we would be moving into a different (underused) part of the Sequel code base. I can do some experiments, but the big challenge is gaining confidence, because the basic tests in the plugin don't really verify the more complex statements out in the wild. I can only really live-test on Postgres.