One reason for this is that we track sql_last_value, a value taken from the results: it is updated for each row read, so its eventual value is the value of the tracking column for the last row.
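To illustrate what that tracking looks like in practice, here is a minimal sketch of an input using it (the table, column, and path names are placeholders I've assumed, not taken from this thread); the :sql_last_value placeholder in the statement is filled in from the persisted tracking value each run:

```
input {
  jdbc {
    # connection settings omitted
    statement => "SELECT * FROM students WHERE st_id > :sql_last_value ORDER BY st_id"
    use_column_value => true
    tracking_column => "st_id"
    # sql_last_value is persisted here between runs
    last_run_metadata_path => "/path/to/.jdbc_last_run"
    schedule => "* * * * *"
  }
}
```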
The jdbc input has a few different modes of operation, each selected via a combination of various settings. We don't do a good job of separating these into distinct code paths; they are quite intermingled. This makes adding a parallel execution feature very difficult without refactoring the code completely.
It is of little help right now, but we have plans to move the code to a Java plugin that will share a lot of common code with the jdbc_streaming and jdbc_static filters. No ETA on when, though.
Thanks for your comments. I tried to solve it another way, as described below:
Instead of jdbc_page_size, I used only jdbc_fetch_size.
Instead of one jdbc input plugin, I added ten, each SQL execution filtered on the last digit of my primary key: RIGHT(ST_ID,1)=0, RIGHT(ST_ID,1)=1, RIGHT(ST_ID,1)=2, etc.
I maintain 10 different sql_last_value values.
This approach works for me (a rough sketch of the config is below). Do you see any impact? I am new to Logstash.
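For reference, here is roughly what that setup looks like; the connection details, table and column names, fetch size, schedule, and paths shown are assumptions on my part, and only two of the ten inputs are spelled out:

```
input {
  jdbc {
    # ... jdbc_connection_string, driver, and credentials go here ...
    statement => "SELECT * FROM students WHERE RIGHT(ST_ID,1) = 0 AND ST_ID > :sql_last_value ORDER BY ST_ID"
    use_column_value => true
    tracking_column => "st_id"
    jdbc_fetch_size => 1000
    # separate metadata file so this input keeps its own sql_last_value
    last_run_metadata_path => "/path/to/.jdbc_last_run_0"
    schedule => "* * * * *"
  }
  jdbc {
    # ... same connection settings ...
    statement => "SELECT * FROM students WHERE RIGHT(ST_ID,1) = 1 AND ST_ID > :sql_last_value ORDER BY ST_ID"
    use_column_value => true
    tracking_column => "st_id"
    jdbc_fetch_size => 1000
    last_run_metadata_path => "/path/to/.jdbc_last_run_1"
    schedule => "* * * * *"
  }
  # ... eight more jdbc inputs for digits 2 through 9 ...
}
```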