Logstash jdbc input plugin schedule doesn't work

Hi everyone,

I am trying to run my input plugin at a specific time, for example 11:30 AM, but the schedule doesn't work. The scheduled time passes, but nothing is printed to the command line. Any idea what the issue is, and what an appropriate fix would be?

Logstash configuration:

input {
  jdbc {
    jdbc_driver_library => "/usr/lib/AthenaJDBC42_2.0.7.jar"
    jdbc_driver_class => "com.simba.athena.jdbc.Driver"
    jdbc_connection_string => "***"
    jdbc_user => "athena"
    schedule => "30 11 * * *"
    statement => "select distinct customer_id, farm_id, phase_id, machine_id, format_datetime(from_unixtime(capture_timestamp), 'yyyy-MM-dd') as op_date from eis_data_lake.image"
    type => "Jdbc_Athena"
  }
}

output {
  if [type] == "Jdbc_Athena" {
    elasticsearch {
      hosts => ["${ES_HOST}"]
      user => "${USERNAME}"
      password => "${PASSWORD}"
      index => "test-athena"
      document_type => "athena"
    }
  }

  stdout {
    codec => rubydebug {
      metadata => true
    }
  }
}

Command line output:

2019/07/08 18:05:49 Setting 'cloud.id' from environment.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-07-08T18:06:09,425][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-07-08T18:06:09,441][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-07-08T18:06:09,840][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.1"}
[2019-07-08T18:06:09,865][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"b52c0942-a6d2-4f1a-a14d-6dfec3def9d2", :path=>"/usr/share/logstash/data/uuid"}
[2019-07-08T18:06:16,518][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"test-athena", password=><password>, id=>"b65fa4534df6208c620ad7a2adf0f5c9ff63056ab76a6f12cbc428a86d2347e3", user=>"elastic", hosts=>[https://c41c137ba4fb498b8d8ca13ddcb77867.us-west-2.aws.found.io:9243], document_type=>"athena", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_f15ce7fe-4ec3-4ab0-999c-dab16ef5e474", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-07-08T18:06:17,494][INFO ][org.logstash.ackedqueue.QueueUpgrade] No PQ version file found, upgrading to PQ v2.
[2019-07-08T18:06:17,997][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@c41c137ba4fb498b8d8ca13ddcb77867.us-west-2.aws.found.io:9243/]}}
[2019-07-08T18:06:18,414][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@c41c137ba4fb498b8d8ca13ddcb77867.us-west-2.aws.found.io:9243/"}
[2019-07-08T18:06:18,585][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-07-08T18:06:18,587][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-08T18:06:18,607][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://c41c137ba4fb498b8d8ca13ddcb77867.us-west-2.aws.found.io:9243"]}
[2019-07-08T18:06:18,612][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-07-08T18:06:18,653][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"pipeline1", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x44332a8a run>"}
[2019-07-08T18:06:18,727][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-07-08T18:06:18,802][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"pipeline1"}
[2019-07-08T18:06:18,873][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:pipeline1], :non_running_pipelines=>[]}
[2019-07-08T18:06:19,220][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/cronline.rb:77: warning: constant ::Fixnum is deprecated

Does anyone know why the schedule doesn't work?

As per your schedule, your SQL statement will run at 11:30 AM UTC; there is no doubt about that.
Set your schedule to "* * * * *" (every minute) to check whether it is working at all, as in the sketch below.
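
A minimal debugging sketch (everything except the schedule stays as in your config):

input {
  jdbc {
    # ... same driver, connection, and statement settings as before ...
    schedule => "* * * * *"   # fire every minute while testing
  }
}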

Every minute works, but a specific time doesn't.

Are you expecting the time to be in your local timezone, or in UTC?

Oh, I actually didn't think about that; I expected it to be in local time.

I think the scheduler uses UTC. Easier for you to verify that than for me.
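
One quick way to check (assuming you have shell access on the Logstash host): compare the host's local clock with UTC, then set the schedule a couple of minutes ahead in UTC and see when it fires.

date       # local time on the host
date -u    # current UTC time

# hypothetical: if `date -u` shows 18:10, this should fire about two minutes later
schedule => "12 18 * * *"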

The rufus documentation has an example of supplying the timezone in the cronline.
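
A minimal sketch based on that example (America/Los_Angeles is only an illustration; substitute your own zone name):

input {
  jdbc {
    # ... connection settings as in your config ...
    # rufus-scheduler accepts an optional timezone name as an extra field
    schedule => "30 11 * * * America/Los_Angeles"   # 11:30 AM in the named zone
  }
}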

Yes, it was UTC! I guess I'll just work with UTC rather than local then. Thanks for your suggestion!
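
For anyone else hitting this, the conversion is just an offset shift; e.g. assuming a UTC-7 local offset (purely an illustration), 11:30 AM local becomes 18:30 UTC:

schedule => "30 18 * * *"   # 18:30 UTC = 11:30 AM at UTC-7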

You may not have noticed that I edited my post to link to the documentation that shows how to supply the timezone in the schedule 🙂

Yes, that's because of UTC. I mentioned that above.
