Our team is using Logstash to sync data between MySQL and Elasticsearch, and I followed "How to keep Elasticsearch synchronized with a relational database using Logstash and JDBC" to set up my pipeline. When Logstash runs on my local machine through docker-compose, the pipeline works well. But when deployed into k8s, the schedule is wrong: I set the schedule to every 10 seconds, yet on k8s it runs only every 1 minute.
Logstash image: docker.elastic.co/logstash/logstash:7.9.3
Docker version: 19.03.12
Minikube version: v1.17.1

My pipeline config file:
input {
  jdbc {
    jdbc_driver_library => "/opt/mysql-connector-java-8.0.16.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://${JDBC_HOSTNAME}:3306/${DB_NAME}?characterEncoding=utf8&useSSL=false&serverTimezone=UTC&rewriteBatchedStatements=true&zeroDateTimeBehavior=convertToNull"
    jdbc_user => "${JDBC_USER}"
    jdbc_password => "${JDBC_PASSWORD}"
    jdbc_paging_enabled => true
    tracking_column => "unix_ts_in_secs"
    use_column_value => true
    tracking_column_type => "numeric"
    schedule => "*/10 * * * * *"
    statement => "SELECT *, UNIX_TIMESTAMP(check_updated_time) AS unix_ts_in_secs FROM my_table WHERE (UNIX_TIMESTAMP(check_updated_time)) > :sql_last_value AND check_updated_time < NOW() ORDER BY check_updated_time DESC"
  }
}

filter {
  mutate {
    copy => { "summary_id" => "[@metadata][_id]" }
    remove_field => ["summary_id", "@version", "unix_ts_in_secs"]
  }
}

output {
  # stdout { codec => "rubydebug" }
  elasticsearch {
    index => "my_es_idx"
    document_id => "%{[@metadata][_id]}"
    doc_as_upsert => true
    action => "update"
    hosts => "http://es-master:9200"
  }
}
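For context, on k8s the config above is mounted into the Logstash pod from a ConfigMap, roughly like this (the names here are illustrative, not my actual manifests):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline   # hypothetical name
data:
  pipeline.conf: |
    # the input/filter/output config shown above goes here

# and in the Logstash Deployment pod spec:
#   volumeMounts:
#     - name: pipeline
#       mountPath: /usr/share/logstash/pipeline
#   volumes:
#     - name: pipeline
#       configMap:
#         name: logstash-pipeline
```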
The schedule is set to every 10 seconds, and this works when Logstash runs on my local machine through docker-compose, but when deployed on k8s (I have tested both minikube and Rancher) the pipeline runs every 1 minute, not every 10 seconds.
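If I understand rufus-scheduler (the library the jdbc input uses for `schedule`) correctly, a 6-field cron expression has a leading seconds field, while a 5-field one is standard cron with minute resolution; mine has 6 fields, so I would expect second-level firing. A minimal sketch of that distinction (my own illustration, not Logstash code):

```python
def cron_resolution(expr: str) -> str:
    # rufus-scheduler treats a 6-field expression as
    # "sec min hour day month weekday" (second resolution);
    # a 5-field expression is classic cron (minute resolution).
    fields = expr.split()
    if len(fields) == 6:
        return "seconds"
    if len(fields) == 5:
        return "minutes"
    raise ValueError("unexpected number of cron fields")

print(cron_resolution("*/10 * * * * *"))  # seconds  (my schedule)
print(cron_resolution("*/10 * * * *"))    # minutes  (classic cron)
```

So a 10-second cadence should be valid; the question is why it degrades to one minute only on k8s.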
Can anyone confirm whether the JDBC input schedule has some limitation on k8s?