I'd like to export the tom_test2 PostgreSQL table to Elasticsearch. The table has 176805 rows:
=> select count(*) from tom_test2;
 count
--------
 176805
(1 row)
The following Logstash conf file imports my data into Elasticsearch correctly:
input {
    jdbc {
        # Postgres jdbc connection string to our database, mydb
        jdbc_connection_string => "xxx"
        # The user we wish to execute our statement as
        jdbc_user => "xxx"
        jdbc_password => "xxx"
        # The path to our downloaded jdbc driver
        jdbc_driver_library => "xxx"
        # The name of the driver class for Postgresql
        jdbc_driver_class => "org.postgresql.Driver"
        # our query
        statement => "select * from tom_test2"
    }
}

output {
    elasticsearch {
        hosts => ["xxx"]
        index => "tom"
        document_type => "tom_test"
    }
}
In Elasticsearch:

GET tom/tom_test/_search

"hits": {
    "total": 176805,
    "max_score": 1
}
I'm deleting my index in Elasticsearch:

DELETE tom
I would now like to do the same operation using jdbc_page_size, in case my data becomes bigger. My Logstash conf file is now:
input {
    jdbc {
        # Postgres jdbc connection string to our database, mydb
        jdbc_connection_string => "xxx"
        # The user we wish to execute our statement as
        jdbc_user => "xxx"
        jdbc_password => "xxx"
        # The path to our downloaded jdbc driver
        jdbc_driver_library => "xxx"
        # The name of the driver class for Postgresql
        jdbc_driver_class => "org.postgresql.Driver"
        # our query
        statement => "select * from tom_test2"
        jdbc_page_size => 1000
        jdbc_paging_enabled => true
    }
}

output {
    elasticsearch {
        hosts => ["xxx"]
        index => "tom"
        document_type => "tom_test"
    }
}
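
From what I understand, with jdbc_paging_enabled the plugin splits the statement into pages of jdbc_page_size rows, so PostgreSQL should receive a series of LIMIT/OFFSET queries roughly like the sketch below (this is my assumption about the generated SQL, and the offsets are only illustrative):

-- roughly what I expect each page to look like (assumption, not the exact SQL Logstash issues)
SELECT * FROM (select * from tom_test2) AS t1 LIMIT 1000 OFFSET 0;
SELECT * FROM (select * from tom_test2) AS t1 LIMIT 1000 OFFSET 1000;
-- ... and so on, up to OFFSET 176000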
My count is now wrong:
GET tom/tom_test/_search

"hits": {
    "total": 106174,
    "max_score": 1
}
so 176805 - 106174 = 70631 rows are missing.
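
My statement has no ORDER BY, so one thing I'm wondering is whether the individual pages are even stable between executions. A check I could run directly in psql (assuming tom_test2 has a primary key column named id, which is only a guess for illustration) would be to fetch the same page twice and compare the results:

-- if these two result sets differ, the unordered LIMIT/OFFSET paging could skip or duplicate rows (my assumption)
SELECT id FROM (SELECT * FROM tom_test2) AS t1 LIMIT 1000 OFFSET 50000;
SELECT id FROM (SELECT * FROM tom_test2) AS t1 LIMIT 1000 OFFSET 50000;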