Not sure if I've misconfigured something, since this is my first time working with JDBC. I am trying to import a large PostgreSQL database into Elasticsearch using Logstash and the logstash-input-jdbc plugin. Here is my pipeline:
input {
  jdbc {
    jdbc_driver_library => "/home/ubuntu/postgresql-42.1.4.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://myhost.com:5432/dbname?user=myuser&password=mypassword"
    jdbc_user => "myuser"
    jdbc_password => "mypassword"
    # schedule => "* * * * *"
    statement => "SELECT * from tablename LIMIT 1"
    jdbc_paging_enabled => "true"
    # jdbc_fetch_size => "1000"
  }
}
output {
  stdout { codec => json_lines }
  # elasticsearch {
  #   hosts => ["localhost:9200"]
  #   index => "db-import"
  # }
}
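
Before anything else, I assume I can rule out a plain syntax error with Logstash's built-in config check, something like the line below (the --config.test_and_exit flag is from the Logstash docs; the /usr/share/logstash and /etc/logstash paths are the Ubuntu package defaults, and jdbc.conf is just what I'm calling my pipeline file):

# validate the pipeline config and exit without running it
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/jdbc.conf --config.test_and_exit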
I commented out a couple of lines above because I wasn't sure whether they were causing issues. Right now, when I run the service and follow the logs with journalctl -f, I see:
Dec 01 15:01:59 elastisearch systemd[1]: logstash.service: Service hold-off time over, scheduling restart.
Dec 01 15:01:59 elastisearch systemd[1]: Stopped logstash.
Dec 01 15:01:59 elastisearch systemd[1]: Started logstash.
Dec 01 15:02:35 elastisearch logstash[11884]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
and then the next log line is a single row from the database. After that the whole thing repeats: systemd logs the same "Service hold-off time over, scheduling restart" message, Logstash restarts, and the same row is imported again.
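
To separate what the pipeline does from what systemd does, my plan is to stop the service and run the same config once in the foreground (same assumed paths as above), then watch whether the process exits on its own after printing the row:

# stop the service so two instances don't run against the same config
sudo systemctl stop logstash
# run the pipeline in the foreground; it should log to the console
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/jdbc.conf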
Why is it stuck in a loop? Shouldn't it just import the one row and then stop the job? And why is Logstash itself restarting? Does that imply there is a config error somewhere?
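
In case the unit file matters: I haven't modified the service, and I assume these standard systemd commands would show whether a Restart= policy is what keeps bringing it back:

# print the unit file(s) in effect, including any Restart= setting
systemctl cat logstash
# show the service state and the most recent log lines
systemctl status logstash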
Thank you