Logstash Jdbc Input Plugin format for timestamp in prepared statement

We are looking to use prepared statements with the Jdbc input plugin, where the source database is a Db2 instance. So far we have not been using prepared statements, so the relevant part of the (working) config looks like this:

input
{
    jdbc
    {
        statement =>                "SELECT * from my_table where timestamp_field>:sql_last_value order by timestamp_field ASC"
        use_column_value =>         true
        tracking_column =>          "timestamp_field"
        tracking_column_type =>     "timestamp"
        last_run_metadata_path =>   "/my_jdbc_last_run"
    }
}

Converted to prepared statements like so:

input
{
    jdbc
    {
        statement =>                "SELECT * from my_table where timestamp_field> ? order by timestamp_field ASC"
        prepared_statement_bind_values => [ ":sql_last_value" ]
        prepared_statement_name =>  "get_my_data"
        use_prepared_statements =>  true
        use_column_value =>         true
        tracking_column =>          "timestamp_field"
        tracking_column_type =>     "timestamp"
        last_run_metadata_path =>   "/prep_jdbc_last_run"
    }
}

This, however, leads to problems: the value bound to ? carries a time zone, e.g. 1970-01-01 00:00:00 UTC. Unfortunately, the field in the table is a plain timestamp with no attached time zone, so the Db2 instance is unable to resolve the value out of the box. We are still looking into whether we can adjust the statement to handle the time zone, but if Logstash did not send it in the first place we would be happy too.
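
One SQL-side workaround we are still evaluating (an untested sketch: TIMESTAMP_FORMAT and LEFT are standard Db2 scalar functions, but this assumes the bound value actually arrives as a string shaped like the example above) would be to strip the zone suffix and re-parse the value inside the statement:

input
{
    jdbc
    {
        # LEFT(?, 19) keeps only "1970-01-01 00:00:00" and drops the
        # trailing " UTC"; TIMESTAMP_FORMAT parses that back into a
        # zone-less Db2 timestamp
        statement =>                "SELECT * from my_table where timestamp_field > TIMESTAMP_FORMAT(LEFT(?, 19), 'YYYY-MM-DD HH24:MI:SS') order by timestamp_field ASC"
        prepared_statement_bind_values => [ ":sql_last_value" ]
        prepared_statement_name =>  "get_my_data"
        use_prepared_statements =>  true
        use_column_value =>         true
        tracking_column =>          "timestamp_field"
        tracking_column_type =>     "timestamp"
        last_run_metadata_path =>   "/prep_jdbc_last_run"
    }
}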

We have tried setting values for plugin_timezone and jdbc_default_timezone in the hope that Logstash would resolve the delta amiably, but the time zone still shows up in the passed value.
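
For reference, this is the shape of what we tried (the zone values here are only examples; jdbc_default_timezone takes a canonical zone name, and plugin_timezone accepts "utc" or "local"):

input
{
    jdbc
    {
        # tell the plugin which zone the zone-less database timestamps are in
        jdbc_default_timezone =>    "Etc/UTC"
        # zone in which :sql_last_value is kept ("utc" or "local")
        plugin_timezone =>          "utc"
        # ... remaining settings as in the prepared-statement config above
    }
}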

I fought this issue for a long time while trying to understand it.

Basically my database had the same thing: a timestamp field, statusachieved, holding a local date/time with no time zone.

Elasticsearch will automatically convert that to UTC. Let it do that, as it is a good thing. I kept wondering why on earth it changed my field; I did not want it changed. But google around and you will find that storing dates in UTC is the right approach.

There are two ways this field ends up being used:

  1. Kibana: it converts the UTC value back to your local time zone (depending on your settings; the default is wherever the browser is located). Wherever you are, it applies the +/- offset to local time.
  2. Query (SQL/KQL): Elasticsearch does not convert to the local time zone; it displays the UTC value.

What happens here is that a visualization shows the event at the correct time it happened, while an SQL/KQL query displays the event exactly the way it is saved on disk.

What I did was create another date field, plus two index templates: one mapping the field as is, and one mapping the time zone date field.

Now in Kibana use the field as is, and for SQL/KQL use the time zone date field.
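
As a rough sketch only (the index pattern and template layout are made up here, not my actual templates), mapping the extra field as a date alongside the original looks like this:

{
  "index_patterns": ["mydata-*"],
  "mappings": {
    "properties": {
      "statusachieved":          { "type": "date" },
      "statusachieved_timezone": { "type": "date" }
    }
  }
}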

You have to read up some more in order to understand this. Hope I am making some sense here. :slight_smile:

filter
{
   # create a new field with the same value as the db date field
   # (statusachieved is coming from the database);
   # 07/15/2020 05:00:00 will become 10:00:00 (for me, as I am in CST)
   mutate { add_field => { "statusachieved_timezone" => "%{statusachieved}" } }

   # next, a date filter: let ELK handle everything; we will use this field for Kibana
   date {
      match => ["statusachieved", "dd-MMM-yy HH:mm", "ISO8601"]
      target => "statusachieved"
   }

   ## set this one to UTC; we will use it for sql/kql queries
   # it will stay as is - 05:00:00 - because I am telling ELK that this is UTC time
   date {
      match => ["statusachieved_timezone", "dd-MMM-yy HH:mm", "ISO8601"]
      timezone => "Etc/UTC"
      target => "statusachieved_timezone"
   }
}
