Thank you very much for your time!
I have a CPU with 4 cores. So would the best settings be:
jdbc_fetch_size: 500
jdbc_page_size: 500
pipeline.batch.size: 500
pipeline.workers: 4
?
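To make the question concrete, this is where I would put those settings — the file names and values below are my assumption of how it should look, not a tested setup:

```
# logstash.yml (assumed values for a 4-core CPU)
pipeline.workers: 4
pipeline.batch.size: 500

# pipeline config — jdbc input section
input {
  jdbc {
    # ...connection settings omitted...
    jdbc_paging_enabled => true
    jdbc_page_size => 500    # rows per SQL query
    jdbc_fetch_size => 500   # rows per driver round-trip
  }
}
```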
But this would result in 200 DB queries(!) to fetch 100,000 rows. (Is this 100,000 fixed, or do you mean that since jdbc_page_size was not set, the default of 100,000 was used?)
Is the "pipeline batch" the number of rows/docs being processed at once? Does Logstash convert the data into the ES format before sending it to ES? On screen I see raw DB rows, but since it takes so long I think Logstash is doing something with them...
I think this is a Logstash problem. In the Java monitor I can see that ES is idle.
How should Xeon hyper-threads be treated? Are we still talking about the number of physical cores, not threads?
Regards