Logstash 5.1 with JDBC input plugin: Always Restarting

I am trying to read the contents of a table from my database. In Logstash 2.x I ran into Java heap memory problems, although it did work with a small result set. After upgrading to 5.1 with X-Pack I cannot manage to read from the database at all: Logstash keeps restarting itself, and I can't tell what is wrong. Please find my config file and the output below.

Here is my config file:

input {
  jdbc {
    type => "jdbc-demo"
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-5.1.40-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/statistics"
    jdbc_user => "username"
    jdbc_password => "password"
    statement => "SELECT * FROM stations limit 1"
  }
}

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    user => "elastic"
    password => "changeme"
  }
}
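
In case it is relevant: I have not enabled any of the jdbc input's paging or fetch-size options. From what I understand of the plugin docs, something like the sketch below should pull the result set in chunks instead of materializing it all at once (the numbers are example values I picked, not tested recommendations, and useCursorFetch is a MySQL Connector/J parameter the driver needs before it honors a fetch size):

jdbc {
  # ... same driver/user/password settings as above ...
  # MySQL only streams rows when cursor fetch is enabled on the connection
  jdbc_connection_string => "jdbc:mysql://localhost:3306/statistics?useCursorFetch=true"
  # fetch rows from the server in chunks instead of all at once
  jdbc_fetch_size => 1000
  # have the plugin wrap the statement in LIMIT/OFFSET pages
  jdbc_paging_enabled => true
  jdbc_page_size => 10000
  statement => "SELECT * FROM stations"
}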

Here are the logs:

[2017-01-30T10:35:11,044][INFO ][logstash.inputs.jdbc ] (0.011000s) SELECT * FROM stations limit 1
[2017-01-30T10:35:11,182][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://~hidden~:~hidden~@localhost:9200"]}}
[2017-01-30T10:35:11,183][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0x2aef5bf7 URL:http://~hidden~:~hidden~@localhost:9200>, :healthcheck_path=>"/"}
[2017-01-30T10:35:11,266][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x2aef5bf7 URL:http://~hidden~:~hidden~@localhost:9200>}
[2017-01-30T10:35:11,267][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-01-30T10:35:11,297][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-01-30T10:35:11,301][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["localhost:9200"]}
[2017-01-30T10:35:11,303][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>32, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>4000}
[2017-01-30T10:35:11,317][INFO ][logstash.pipeline ] Pipeline main started
[2017-01-30T10:35:11,383][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-01-30T10:35:14,329][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
[2017-01-30T10:35:21,871][INFO ][logstash.inputs.jdbc ] (0.009000s) SELECT * FROM stations limit 1
[2017-01-30T10:35:22,021][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://~hidden~:~hidden~@localhost:9200"]}}
[2017-01-30T10:35:22,023][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0x4713a90f URL:http://~hidden~:~hidden~@localhost:9200>, :healthcheck_path=>"/"}
[2017-01-30T10:35:22,107][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x4713a90f URL:http://~hidden~:~hidden~@localhost:9200>}
[2017-01-30T10:35:22,108][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-01-30T10:35:22,136][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-01-30T10:35:22,140][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["localhost:9200"]}
[2017-01-30T10:35:22,141][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>32, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>4000}
[2017-01-30T10:35:22,153][INFO ][logstash.pipeline ] Pipeline main started
[2017-01-30T10:35:22,224][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-01-30T10:35:25,163][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}

Here are the errors from a debug run:

[2017-01-30T11:49:24,332][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>32, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>4000}
[2017-01-30T11:49:24,343][INFO ][logstash.pipeline ] Pipeline main started
[2017-01-30T11:49:24,368][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-01-30T11:50:39,744][WARN ][logstash.runner ] SIGTERM received. Shutting down the agent.
[2017-01-30T11:50:39,744][ERROR][logstash.pipeline ] Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash. {"exception"=>java.lang.OutOfMemoryError: Java heap space, "backtrace"=>["org.jruby.runtime.ivars.StampedVariableAccessor.createTableUnsafe(StampedVariableAccessor.java:124)", "org.jruby.runtime.ivars.StampedVariableAccessor.setVariable(StampedVariableAccessor.java:97)", "org.jruby.runtime.ivars.VariableTableManager.setVariableInternal(VariableTableManager.java:158)", "org.jruby.runtime.ivars.VariableTableManager.setObjectId(VariableTableManager.java:561)", "org.jruby.runtime.ivars.VariableTableManager.initObjectId(VariableTableManager.java:547)", "org.jruby.runtime.ivars.VariableTableManager.getObjectId(VariableTableManager.java:127)", "org.jruby.RubyBasicObject.getObjectId(RubyBasicObject.java:1020)", "org.jruby.RubyBasicObject.id(RubyBasicObject.java:1008)", "org.jruby.Ruby.execRecursiveInternal(Ruby.java:4181)", "org.jruby.Ruby.execRecursiveOuter(Ruby.java:4257)", "org.jruby.RubyArray.hash19(RubyArray.java:689)", "org.jruby.RubyArray$INVOKER$i$0$0$hash19.call(RubyArray$INVOKER$i$0$0$hash19.gen)", "org.jruby.runtime.Helpers.invokedynamic(Helpers.java:2803)", "org.jruby.RubyObject.hashCode(RubyObject.java:506)", "com.concurrent_ruby.ext.jsr166e.ConcurrentHashMapV8.internalPutIfAbsent(ConcurrentHashMapV8.java:1472)", "com.concurrent_ruby.ext.jsr166e.ConcurrentHashMapV8.putIfAbsent(ConcurrentHashMapV8.java:2793)", "com.concurrent_ruby.ext.JRubyMapBackendLibrary$JRubyMapBackend.put_if_absent(JRubyMapBackendLibrary.java:129)", "com.concurrent_ruby.ext.JRubyMapBackendLibrary$JRubyMapBackend$INVOKER$i$2$0$put_if_absent.call(JRubyMapBackendLibrary$JRubyMapBackend$INVOKER$i$2$0$put_if_absent.gen)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:202)", "rubyjit.LogStash::Instrument::MetricStore$$fetch_or_store_41d8d722a50485bbf5fa725515e2d1ebf1a5e2af1028566121.file(/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:55)", "rubyjit.LogStash::Instrument::MetricStore$$fetch_or_store_41d8d722a50485bbf5fa725515e2d1ebf1a5e2af1028566121.file(/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb)", "org.jruby.ast.executable.AbstractScript.file(AbstractScript.java:46)", "org.jruby.internal.runtime.methods.JittedMethod.call(JittedMethod.java:241)", "org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:211)", "org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:222)", "rubyjit.LogStash::Instrument::Collector$$push_b4714a382090ec331e46a649349f002cf522cdf41028566121.chained_0_rescue_1$RUBY$SYNTHETIC__file__(/usr/share/logstash/logstash-core/lib/logstash/instrument/collector.rb:41)", "rubyjit.LogStash::Instrument::Collector$$push_b4714a382090ec331e46a649349f002cf522cdf41028566121.file(/usr/share/logstash/logstash-core/lib/logstash/instrument/collector.rb:40)", "rubyjit.LogStash::Instrument::Collector$$push_b4714a382090ec331e46a649349f002cf522cdf41028566121.file(/usr/share/logstash/logstash-core/lib/logstash/instrument/collector.rb)", "org.jruby.internal.runtime.methods.JittedMethod.call(JittedMethod.java:121)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:70)", "rubyjit.LogStash::Instrument::Metric$$increment_6fad7e5de47a973535c62d7baef11ff589e3a0c91028566121.file(/usr/share/logstash/logstash-core/lib/logstash/instrument/metric.rb:22)", 
"rubyjit.LogStash::Instrument::Metric$$increment_6fad7e5de47a973535c62d7baef11ff589e3a0c91028566121.file(/usr/share/logstash/logstash-core/lib/logstash/instrument/metric.rb)"]}
[2017-01-30T11:50:39,746][WARN ][logstash.inputs.jdbc ] Exception when executing JDBC query {:exception=>#<Sequel::DatabaseError: Java::JavaLang::OutOfMemoryError: Java heap space>}
[2017-01-30T11:50:39,755][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
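
For completeness: from what I've read, the heap itself can be raised in Logstash 5.x through config/jvm.options (or the LS_JAVA_OPTS environment variable), and the number of in-flight events, which the log above reports as pipeline.max_inflight=>4000, can be reduced in logstash.yml. A sketch with example values, not tested recommendations:

# config/jvm.options -- raise the heap above the 5.x defaults
-Xms2g
-Xmx2g

# config/logstash.yml -- fewer workers means fewer in-flight events
# (max_inflight = pipeline.workers * pipeline.batch.size)
pipeline.workers: 4
pipeline.batch.size: 125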
