Issue with Logstash & JDBC input (PostgreSQL)

Hi, I use Logstash to pull data from PostgreSQL and send it to Elasticsearch, using the JDBC driver. The issue is that no index is created in Elasticsearch after I run Logstash.

Here is my Logstash config.

input {
    jdbc {
        jdbc_driver_library => "D:/postgresql-42.2.5.jar"
        jdbc_driver_class => "org.postgresql.Driver"
        jdbc_connection_string => "jdbc:postgresql://127.0.0.1:57610/mydb"
        jdbc_user => "myuser"
        jdbc_password => "mypw"
        statement => "select * from mytable"
    }
}


output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "logstashDB-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
}

Here are my Logstash logs.

[2019-03-09T10:51:18,139][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-03-09T10:51:18,412][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.2"}
[2019-03-09T10:51:37,547][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-03-09T10:51:39,215][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-03-09T10:51:47,662][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-03-09T10:51:47,917][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-03-09T10:51:47,927][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-03-09T10:51:48,114][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-03-09T10:51:48,141][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-03-09T10:51:48,314][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-03-09T10:51:49,465][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x56144a9d run>"}
[2019-03-09T10:51:49,905][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-03-09T10:51:51,819][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Thanks for your help.

The logs indicate that a Logstash pipeline is starting up okay, but they don't contain any references to your JDBC input. I'd recommend trying again with the --log.level debug flag to include debug output, which may help us see what is going on.

Hi, I added the flag and here's the output (starting from where it says Logstash started successfully; everything before that looks okay).

[2019-03-10T09:32:33,342][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-03-10T09:32:33,559][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-03-10T09:32:33,562][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-03-10T09:32:34,923][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["D:/logstash-6.5.2/CONTRIBUTORS", "D:/logstash-6.5.2/Gemfile", "D:/logstash-6.5.2/Gemfile.lock", "D:/logstash-6.5.2/LICENSE.txt", "D:/logstash-6.5.2/NOTICE.TXT", "D:/logstash-6.5.2/[localhost", "D:/logstash-6.5.2/bin", "D:/logstash-6.5.2/config", "D:/logstash-6.5.2/data", "D:/logstash-6.5.2/first-pipeline.conf", "D:/logstash-6.5.2/lib", "D:/logstash-6.5.2/logs", "D:/logstash-6.5.2/logstash-core", "D:/logstash-6.5.2/logstash-core-plugin-api", "D:/logstash-6.5.2/logstash-simple.conf", "D:/logstash-6.5.2/modules", "D:/logstash-6.5.2/rubydebug", "D:/logstash-6.5.2/simple1.conf", "D:/logstash-6.5.2/tools", "D:/logstash-6.5.2/vendor", "D:/logstash-6.5.2/x-pack"]}
[2019-03-10T09:32:34,929][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"D:/logstash-6.5.2/first-pipeline2.conf"}
[2019-03-10T09:32:34,976][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}
[2019-03-10T09:32:36,545][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x5edb7082 sleep>"}
[2019-03-10T09:32:37,870][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["D:/logstash-6.5.2/CONTRIBUTORS", "D:/logstash-6.5.2/Gemfile", "D:/logstash-6.5.2/Gemfile.lock", "D:/logstash-6.5.2/LICENSE.txt", "D:/logstash-6.5.2/NOTICE.TXT", "D:/logstash-6.5.2/[localhost", "D:/logstash-6.5.2/bin", "D:/logstash-6.5.2/config", "D:/logstash-6.5.2/data", "D:/logstash-6.5.2/first-pipeline.conf", "D:/logstash-6.5.2/lib", "D:/logstash-6.5.2/logs", "D:/logstash-6.5.2/logstash-core", "D:/logstash-6.5.2/logstash-core-plugin-api", "D:/logstash-6.5.2/logstash-simple.conf", "D:/logstash-6.5.2/modules", "D:/logstash-6.5.2/rubydebug", "D:/logstash-6.5.2/simple1.conf", "D:/logstash-6.5.2/tools", "D:/logstash-6.5.2/vendor", "D:/logstash-6.5.2/x-pack"]}
[2019-03-10T09:32:37,873][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"D:/logstash-6.5.2/first-pipeline2.conf"}
[2019-03-10T09:32:37,885][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}
[2019-03-10T09:32:38,104][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2019-03-10T09:32:38,576][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-03-10T09:32:38,577][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-03-10T09:32:40,863][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["D:/logstash-6.5.2/CONTRIBUTORS", "D:/logstash-6.5.2/Gemfile", "D:/logstash-6.5.2/Gemfile.lock", "D:/logstash-6.5.2/LICENSE.txt", "D:/logstash-6.5.2/NOTICE.TXT", "D:/logstash-6.5.2/[localhost", "D:/logstash-6.5.2/bin", "D:/logstash-6.5.2/config", "D:/logstash-6.5.2/data", "D:/logstash-6.5.2/first-pipeline.conf", "D:/logstash-6.5.2/lib", "D:/logstash-6.5.2/logs", "D:/logstash-6.5.2/logstash-core", "D:/logstash-6.5.2/logstash-core-plugin-api", "D:/logstash-6.5.2/logstash-simple.conf", "D:/logstash-6.5.2/modules", "D:/logstash-6.5.2/rubydebug", "D:/logstash-6.5.2/simple1.conf", "D:/logstash-6.5.2/tools", "D:/logstash-6.5.2/vendor", "D:/logstash-6.5.2/x-pack"]}
[2019-03-10T09:32:40,866][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"D:/logstash-6.5.2/first-pipeline2.conf"}

I tried running another Logstash instance with a CSV file input. The logs showed the same output as what I pasted above, but that instance still created an index in Elasticsearch, so I think the issue is specific to the JDBC input. What should I do or check to find the problem?
Any guidance would be appreciated.

Hello chaire,

It looks like your statement is not being executed. Add a schedule so Logstash re-runs it periodically:

schedule => "* * * * *"
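
In case it is not obvious where it goes, the schedule option belongs inside the jdbc input block. A minimal sketch based on the original config (the cron expression runs the statement every minute):

```
input {
    jdbc {
        jdbc_driver_library => "D:/postgresql-42.2.5.jar"
        jdbc_driver_class => "org.postgresql.Driver"
        jdbc_connection_string => "jdbc:postgresql://127.0.0.1:57610/mydb"
        jdbc_user => "myuser"
        jdbc_password => "mypw"
        statement => "select * from mytable"
        # run the statement every minute (standard cron syntax)
        schedule => "* * * * *"
    }
}
```

Without a schedule, the statement runs only once at startup, so if it fails silently there is nothing to retry.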

You can run Logstash with this command to watch its behavior in real time:

/usr/share/logstash/bin/logstash -f /path_to/logstash_configFile.conf

Hi hermann, thanks for the response.
I've figured out the problem: it was in the jdbc_connection_string setting. It turned out I had put in the wrong port. That was all, but I was confused because the logs gave me no clue about it (no error message at all). It's solved now. Thank you.
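
For anyone else who hits the same silent failure: it can help to sanity-check the host and port from the connection string before starting Logstash. A small, hypothetical Python sketch (the parse_jdbc_url and port_is_open helpers are not part of Logstash or the JDBC driver; they are just illustrations):

```python
import re
import socket


def parse_jdbc_url(url):
    """Pull host, port, and database out of a jdbc:postgresql:// URL."""
    m = re.match(r"jdbc:postgresql://([^:/]+):(\d+)/(\w+)", url)
    if not m:
        raise ValueError("unrecognised JDBC URL: %s" % url)
    return m.group(1), int(m.group(2)), m.group(3)


def port_is_open(host, port, timeout=2):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


host, port, db = parse_jdbc_url("jdbc:postgresql://127.0.0.1:57610/mydb")
print(host, port, db, port_is_open(host, port))
```

If the check prints False, nothing is listening on that port and the jdbc_connection_string is the first thing to fix.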