We use Logstash 5.2.2 on CentOS 7 to extract data from an Oracle database and load it into Elasticsearch (5.2.2). I've noticed that if there is a failure on the Oracle end (bad username/password, etc.), Logstash still returns a zero exit code (success). The problem is that the failure goes unnoticed. I'm not sure whether this is a bug, but ideally Logstash would return a non-zero exit code so the failure could be handled in my scripts. Is there any way to force this?
I don't understand where you see the success flag and how you plan to handle it in your scripts. Any connection errors are logged extensively and retried.
This captures the exit status of the previously run command (in my case, logstash). It's used further down in the script (not included).
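For reference, the relevant part of my wrapper script looks something like this (the paths and config name are illustrative):

```
#!/bin/bash
# Run the Logstash pipeline that pulls from Oracle
/usr/share/logstash/bin/logstash -f /etc/logstash/oracle_import.conf
rc=$?   # exit status of logstash; ideally non-zero on an Oracle failure

# ... further down, the script acts on $rc
if [ "$rc" -ne 0 ]; then
    echo "logstash import failed with exit code $rc" >&2
    exit "$rc"
fi
```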
The issue is that I have no way of knowing it failed other than by manually checking the logs. Retrying is great, but what if it's something like an invalid password?
Ahh - I see.
Logstash, at the level you are looking at, is really just a thread manager. Each input runs in its own thread, as does a certain number of worker threads for the filters and outputs. If an input thread does not throw an error, Logstash will not know about any runtime problems in that input. Some inputs validate things like connections at start-up while others don't; that is up to the plugin author to decide. In the case of the jdbc input, it is designed to be run periodically on a cron-like schedule, so it assumes that the values given in the config are good and that transient connection failures are simply retried at the next scheduled run. Granted, this is not ideal for an unscheduled use case such as yours; it would need to be coded for (as it happens, the jdbc input is due for a major shake-up to make it more modular, etc.).
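For illustration, the scheduled usage the plugin is designed around looks something like this (the connection details and query are placeholders):

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:oracle:thin:@//dbhost:1521/ORCL"
    jdbc_user => "scott"
    jdbc_password => "changeme"
    jdbc_driver_library => "/opt/oracle/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    statement => "SELECT * FROM imports WHERE updated_at > :sql_last_value"
    # cron-like schedule: run every minute; a failed run is simply
    # retried at the next tick rather than surfaced as an exit code
    schedule => "* * * * *"
  }
}
```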
That said, and as you are adept at scripting, a possible workaround that does not depend on LS internals would be to have an exec output delete a "marker" file that your start script creates. The exec output only runs when it receives events, i.e. only if the jdbc input successfully fetched rows from Oracle.
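A rough sketch of what I mean (the marker path, hosts, and command are just examples):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  # Runs once per event received, so it only fires if the jdbc
  # input actually fetched something from Oracle
  exec {
    command => "rm -f /var/run/oracle_import.marker"
  }
}
```

and in your start script:

```
touch /var/run/oracle_import.marker
/usr/share/logstash/bin/logstash -f /etc/logstash/oracle_import.conf
if [ -e /var/run/oracle_import.marker ]; then
    echo "jdbc input produced no events (connection failure or empty result)" >&2
    exit 1
fi
```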
I have not tested the above suggestion btw.
P.S. a marker file that survives the run would also tell you when there were no records in the query result (even if it did connect successfully).
Thank you for the response; it makes perfect sense. Unfortunately, the vast majority of my queries return zero rows, so the suggested workaround wouldn't quite work for me.
Ultimately, I think I need to redesign how I import the data. I retrofitted Logstash into my script, which previously used the (now rather quiet) JDBC Importer, and it seems Logstash was designed with a different model in mind.