Hi All,
I am trying to run Logstash in a multi-pod setup. My sample config is:
input {
  jdbc {
    jdbc_driver_library => "${HOME}/postgres/postgresql-42.3.2.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://${DB_HOST}:5432/${DB_NAME}"
    jdbc_user => "${DB_USER}"
    jdbc_password_filepath => "/mount-path/os_db_pass"
    use_column_value => true
    tracking_column => "created_on"
    tracking_column_type => "timestamp"
    schedule => "*/5 * * * * *"
    last_run_metadata_path => "${HOME}/jdbc_meta/.logstash_jdbc_last_run"
    statement => "SELECT * FROM os_schema.os_table WHERE (created_on > :sql_last_value) ORDER BY created_on ASC LIMIT 5"
  }
}
filter {
  mutate {
    remove_field => ["@version"]
  }
}
output {
  opensearch {
    hosts => ["${OS_HOST}:${OS_PORT}"]
    index => "${OS_INDEX}"
    user => "${OS_USER}"
    password => "${OS_PASS}"
    document_id => "%{id}"
    ssl => false
    ssl_certificate_verification => false
  }
}
I am running multiple pods so that, when there is a large data set, two pods can share the load while reading the same last_run_metadata file.
Does the JDBC input plugin support sharing the metadata file across pods? If so, how can I achieve this?
At present, with this shared configuration, the last_run_metadata file is overwritten by each pod every time a batch of records is picked up from the DB.
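For context, one workaround I am considering instead of sharing the file is to partition the rows between pods. This is only a sketch under assumptions not in my current setup: it assumes the table has a numeric id column, and that POD_COUNT and POD_INDEX environment variables are injected into each pod (e.g. from a StatefulSet ordinal). Each pod then reads a disjoint slice and keeps its own private metadata file:

```
input {
  jdbc {
    # ... same driver/connection settings as above ...
    # Hypothetical partitioning: each pod only selects rows where
    # MOD(id, POD_COUNT) matches its own POD_INDEX, so the pods never
    # need to share tracking state.
    statement => "SELECT * FROM os_schema.os_table WHERE (created_on > :sql_last_value) AND (MOD(id, ${POD_COUNT}) = ${POD_INDEX}) ORDER BY created_on ASC LIMIT 5"
    # Per-pod metadata file, so no pod overwrites another's position.
    last_run_metadata_path => "${HOME}/jdbc_meta/.logstash_jdbc_last_run_${POD_INDEX}"
  }
}
```

I would still prefer a supported way to share the metadata file directly, if one exists.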