Schedule keyword not generating index in ES via JDBC-input plugin

Hello All,

Please help me solve this; I need to deliver it to my manager.

I am trying to connect to the database using the Logstash jdbc input plugin.
To keep the data in Kibana up to date with the table feeding it, I have used the schedule option with "* * * * *" as the value, meaning the query runs every minute.
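For reference, the schedule option takes cron-like syntax; a few example values, as I understand them:

    schedule => "* * * * *"      # every minute
    schedule => "*/5 * * * *"    # every 5 minutes
    schedule => "0 * * * *"      # once an hour, at minute 0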
After running the logstash.conf file, I checked the Logstash logs and noticed that the query runs fine every minute, but the index does not appear in the ES indices list or in Kibana.

The conf file that I am running is:

input {
    jdbc {
        jdbc_connection_string => "jdbc:oracle:thin:@usual_stuff:1111/service_id"
        jdbc_user              => "my username"
        jdbc_password          => "mypassword"
        jdbc_driver_library    => "/home/mypath/ojdbc7.jar"
        jdbc_driver_class      => "Java::oracle.jdbc.driver.OracleDriver"
        #jdbc_validate_connection => true
        schedule               => "* * * * *"
        statement              => "select * from mytable"
    }
}

output {
    stdout { codec => json_lines }
    elasticsearch {
        index         => "aaaaaa"
        hosts         => "http://10.11.221.99:9201"
        document_type => "schedulestry"
    }
}
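One thing I am unsure about: since the statement is a full select * that runs every minute, I suspect each run re-indexes every row as a brand-new document unless the output is given a stable document_id. A sketch of what I mean, assuming my table has a primary key column named id (a placeholder name):

    output {
        elasticsearch {
            hosts         => "http://10.11.221.99:9201"
            index         => "aaaaaa"
            # With a stable document_id, repeated scheduled runs overwrite the
            # same documents instead of creating duplicates every minute.
            document_id   => "%{id}"    # "id" is a placeholder for the real primary key
        }
    }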

The logs are:

[2018-10-19T13:37:24,244][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/apps/tomcat/elk/ELK/logstash-6.2.4/modules/fb_apache/configuration"}
[2018-10-19T13:37:24,264][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/apps/tomcat/elk/ELK/logstash-6.2.4/modules/netflow/configuration"}
[2018-10-19T13:37:24,823][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-10-19T13:37:25,551][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-10-19T13:37:26,071][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9601}
[2018-10-19T13:37:28,655][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"aaaaaa", hosts=>[http://10.23.213.99:9201], document_type=>"schedulestry", id=>"d770d7b40bf389e18c9fbf3252b96c0db1929be0dcfbd80b806ca6b2ea4cd6a6", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_20585914-68e5-4267-9521-41c6725e4db6", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-10-19T13:37:28,760][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5}
[2018-10-19T13:37:29,319][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.23.213.99:9201/]}}
[2018-10-19T13:37:29,332][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.23.213.99:9201/, :path=>"/"}
[2018-10-19T13:37:29,542][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.23.213.99:9201/"}
[2018-10-19T13:37:29,631][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-10-19T13:37:29,636][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-10-19T13:37:29,652][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-10-19T13:37:29,672][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-10-19T13:37:29,725][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://10.23.213.99:9201"]}
[2018-10-19T13:37:29,977][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0xee300ed sleep>"}
[2018-10-19T13:37:30,092][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-10-19T13:38:02,742][INFO ][logstash.inputs.jdbc     ] (0.071772s) select * from mytable
[2018-10-19T13:39:00,229][INFO ][logstash.inputs.jdbc     ] (0.002199s) select * from mytable
[2018-10-19T13:40:00,322][INFO ][logstash.inputs.jdbc     ] (0.001482s) select * from mytable

I have seen old logs from runs where the schedule option was not used; there, the pipeline was closed before the index started to appear in ES. Why does the index not appear in this case?
Kindly help.

Regards,
Jatin

Hi, the issue resolved itself. I checked the index list in Kibana after the weekend and found the index present there.

But I have another issue.
When I insert new records into my SQL table, the scheduled query (which runs the select every minute) should sync the index as well, but I do not see those records in the Discover tab of Kibana.

Please tell me how I can keep the data in Kibana in sync with the data in the SQL table. So far I assumed that Logstash's jdbc input plugin with the schedule option would keep it updated by itself.
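From what I have read, the usual approach seems to be a tracking column with :sql_last_value, so that each scheduled run only fetches rows added since the previous run. A sketch of what I think that would look like for my setup (the id column name is again a placeholder):

    input {
        jdbc {
            jdbc_connection_string => "jdbc:oracle:thin:@usual_stuff:1111/service_id"
            jdbc_user              => "my username"
            jdbc_password          => "mypassword"
            jdbc_driver_library    => "/home/mypath/ojdbc7.jar"
            jdbc_driver_class      => "Java::oracle.jdbc.driver.OracleDriver"
            schedule               => "* * * * *"
            # :sql_last_value holds the highest id seen so far, persisted
            # between runs, so only new rows are fetched each minute.
            statement              => "select * from mytable where id > :sql_last_value order by id"
            use_column_value       => true
            tracking_column        => "id"            # placeholder primary key column
            tracking_column_type   => "numeric"
            last_run_metadata_path => "/home/mypath/.logstash_jdbc_last_run"
        }
    }

Does that sound like the right direction?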
