I'm not a jdbc user, so there is some speculation embedded here, but I will offer it as a line of investigation for you.
Your basic problem here appears to be that each time you execute the query, you are fetching 9.4 million rows from the table and then throwing almost all of them away just to reach the requested page. Then you are doing that again for the next 5000 rows. That takes time.
Any solution that involves a huge OFFSET value will be slow, because the database still has to read and discard every skipped row before it can return the page -- offset-based paging will not work well with a result set this large.
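For a sense of scale, the last page of an offset-based scheme effectively asks the database to do something like this (the table and column names here are made up):

```sql
-- Offset paging: the database must produce and discard the first
-- 9,400,000 rows before it can hand back these 5000, so each page
-- is slower than the one before it.
SELECT * FROM events
ORDER BY id
LIMIT 5000 OFFSET 9400000;
```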
If 'id' is a key by which you can order all the rows, and which you can use as the tracking column, then you may be able to optimize this.
Schedule the input to run once per second. That way the input runs the query, fetches a batch of rows, and then quickly starts the next iteration.
I don't speak SQL, so I don't know how to write the query, but in English what you want is to select from the entire table with 'ORDER BY id', keeping only rows where 'id > :sql_last_value', with 'LIMIT 5000'. This effectively implements paging in a different way (keyset pagination instead of offsets).
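Putting the pieces together, here is a sketch of what I think the input would look like. Untested, since I do not run jdbc myself; the connection settings, table name, and column name are placeholders you would replace with your own:

```
input {
  jdbc {
    # Placeholder connection settings -- substitute your own.
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"
    jdbc_user => "logstash"
    jdbc_driver_library => "/path/to/driver.jar"
    jdbc_driver_class => "org.postgresql.Driver"

    # Six cron fields is rufus-scheduler's seconds syntax: run once per second.
    schedule => "* * * * * *"

    # Keyset paging: fetch the next 5000 rows past the last id we saw.
    statement => "SELECT * FROM events WHERE id > :sql_last_value ORDER BY id ASC LIMIT 5000"

    # Track the highest id seen so that :sql_last_value advances on each run.
    use_column_value => true
    tracking_column => "id"
    tracking_column_type => "numeric"
  }
}
```

Note that jdbc_paging_enabled should stay off with a statement like this, since it would wrap the query in the very LIMIT/OFFSET scheme you are trying to get away from.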
Provided that the index on 'id' fits in the buffer cache, that should perform reasonably well. If the index does not fit, it will thrash the cache and performance will go to hell in a handbasket.