Hi there
We have 25 Logstash pipelines using the JDBC input plugin with a schedule:
input {
  jdbc {
    ...
    schedule => "30 * * * * *"
    ...
  }
}
Each pipeline uses its own seconds value instead of 30, so that the prepared statements don't all execute at the same time.
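To illustrate the staggering, two of the pipelines look roughly like this (the pipeline labels and the second offset of 0 are just examples; only the 30-second offset above is from our actual config):

```
# hypothetical pipeline 1: fires at second 0 of every minute
input {
  jdbc {
    ...
    schedule => "0 * * * * *"
    ...
  }
}

# hypothetical pipeline 2: fires at second 30 of every minute
input {
  jdbc {
    ...
    schedule => "30 * * * * *"
    ...
  }
}
```

(The six-field cron expression is rufus-scheduler syntax, where the first field is seconds.)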
The pipelines can run without any issues for a long time, but occasionally one random pipeline out of the 25 suddenly stops executing its prepared statement.
For instance, we see the following logs for some time:
2022-09-29T14:00:00.000Z (0.2s) EXECUTE prepared-statement-1; [2022-09-29 12:00:20 UTC]
2022-09-29T14:01:00.000Z (0.2s) EXECUTE prepared-statement-1; [2022-09-29 12:00:20 UTC]
2022-09-29T14:02:00.000Z (0.2s) EXECUTE prepared-statement-1; [2022-09-29 12:00:20 UTC]
and at some point the EXECUTE log entries simply stop appearing.
The pipeline doesn't recover on its own, and we have to restart the Logstash instance to fix the inconsistency between our DB and Elasticsearch. Sometimes the pipeline stays in this broken state for a day or two unnoticed, because as long as no new records arrive in the DB there are no inconsistencies.
We recently updated our Logstash Docker image to the latest available version, 8.4.2, but the issue still persists, with no errors in the logs.
I found a related GitHub issue and have already posted the details above there, but I haven't received any response in 3 weeks:
"crond job successful execute a few times, but stuck without any error log later"
Has anybody faced the same issue and managed to resolve it? Or do you have any ideas on how to fix this?