Output jdbc data inserts

Hello, my input is a file in S3 and I am doing inserts into an Amazon Aurora PostgreSQL database.

The insert performance is not good: loading more than 1 million records takes about an hour and a half.

Help me

pipeline:
  batch:
    size: 1000
    delay: 50
pipeline.workers: 2
pipeline.id: persisted-queue-pipeline
pipeline.ordered: false
queue.type: persisted
queue.max_bytes: 2000mb
queue.drain: true
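With only two workers and a batch size of 1000, the output flushes many small batches, which often limits throughput more than the database itself. A sketch of higher-throughput settings — the worker count and batch size below are illustrative assumptions, not measured values, and should be tuned to the CPU and memory actually available on the Logstash host:

```yaml
# Sketch only: larger batches and more workers for bulk loading.
pipeline:
  batch:
    size: 5000        # fewer, larger flushes to the jdbc output
    delay: 50
pipeline.workers: 8   # assumption: ~8 vCPUs free on the Logstash host
pipeline.id: persisted-queue-pipeline
pipeline.ordered: false
queue.type: persisted
queue.max_bytes: 2000mb
queue.drain: true
```

A larger batch size raises memory use per worker, so increase it gradually while watching heap usage.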

input {
  s3 {
    bucket => "bucket"
    prefix => "c5/"
    additional_settings => {
      force_path_style => true
      follow_redirects => false
    }
    watch_for_new_files => false
    delete => false
    tags => ["load_file"]
  }
}

filter {
  csv {
    separator => ";"
    columns => [""]
    skip_header => "true"
  }
}

output {
  jdbc {
    driver_jar_path => "/usr/share/logstash/lib/jars/postgresql-42.3.1.jar"
    driver_class => "org.postgresql.Driver"
    connection_string => "jdbc:postgresql:/.rds.amazonaws.com:5432-aurora-database"
    username => ""
    password => ""
    #max_pool_size => "20"
    #flush_size => "4000"
    statement => ["INSERT INTO public.table(
                    campo1, campo2,
                    campo4, campo, campo3,
                    campo5, campo6,
                    campo7, campo8, checksum, campo9)
                  VALUES (CAST(? AS UUID), CAST(? AS NUMERIC), ?, ?, ?, ?, ?, ?, ?, ?, ?)",
                  "uuid", "[target_parametric][0][campo]", "campo1", "campo2", "campo3", "campo4", "campo5", "campo6", "campo7", "checksum", "campo8"]
  }
}
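A likely cause of the slowness is that each event becomes its own round trip to Aurora. Two things may help: uncommenting the pooling/flush options already hinted at in the config, and adding `reWriteBatchedInserts=true` to the PostgreSQL JDBC URL, which makes the pgjdbc driver rewrite batched single-row INSERTs into multi-row INSERTs. A sketch, assuming the installed plugin version supports `max_pool_size` and `flush_size` (the commented lines suggest it does); the host and database names below are placeholders, not values from the original post:

```conf
jdbc {
  driver_jar_path => "/usr/share/logstash/lib/jars/postgresql-42.3.1.jar"
  driver_class => "org.postgresql.Driver"
  # reWriteBatchedInserts is a standard pgjdbc connection parameter.
  # "your-cluster" and "your_db" are placeholders for illustration.
  connection_string => "jdbc:postgresql://your-cluster.rds.amazonaws.com:5432/your_db?reWriteBatchedInserts=true"
  username => ""
  password => ""
  max_pool_size => "20"   # more concurrent connections to the writer
  flush_size => "4000"    # larger batches per flush
  statement => [ ... ]    # same INSERT statement and field mappings as above
}
```

For loads of a million-plus rows, it is also worth considering bypassing row-by-row inserts entirely: Aurora PostgreSQL supports importing directly from S3 with the `aws_s3.table_import_from_s3` function, which uses COPY semantics and is typically much faster than JDBC inserts.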
