Logstash returns 0 (success) on JDBC error

Hi,

We use Logstash 5.2.2 on CentOS 7 to extract data from an Oracle database and load it into Elasticsearch (5.2.2). I've noticed that if there is a failure on the Oracle end (bad username/password, etc.), Logstash still exits with a zero return code (success). The problem is that the failure goes unnoticed. I'm not sure whether this is a bug, but ideally Logstash would return a non-zero exit code so the failure can be handled in my scripts. Is there any way to force this?

My conf file is created dynamically like so:

cat <<EOF > ${TMP_CONF}
input {
  jdbc {
    jdbc_driver_library => "/etc/elasticsearch-jdbc/lib/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "${DB_STRING}"
    jdbc_user => "${DB_USER}"
    jdbc_password => "${DB_PASSWORD}"
    jdbc_fetch_size => 1000
    statement_filepath => "${ES_SQL}"
    parameters => { "update_date" => "${LAST_UPDATE}" }
    lowercase_column_names => false
  }
}
${FILTER_JSON}
output {
  elasticsearch {
    hosts => ${HOST_STRING}
    index => "${ES_INDEX}"
    document_id => "%{${ES_DOC_ID}}"
    document_type => "${INDEX_TYPE}"
    flush_size => 1000
    http_compression => true
  }
}
EOF

echo "Starting update: `${DATE}`" | tee -a ${TMP_FILE}

${LOGSTASH} -f ${TMP_CONF} --path.settings /etc/logstash

RET_CODE="${?}"

if [ "${RET_CODE}" -ne 0 ]; then
  echo "Error updating index, return code: ${RET_CODE}" | tee -a ${TMP_FILE}
  send_email 1 "Error updating index, return code: ${RET_CODE}"
fi

Any help greatly appreciated!

Cheers,
Dave

I don't understand where you see the success flag and how you plan to handle it in your scripts. Any connection errors are extensively logged, and the connection is retried.

Thanks for the response!

RET_CODE="${?}"

This captures the exit status of the previously run command (in my case, Logstash). It's used further down in the script (not included).

The issue is, I've no way of knowing it failed other than manually checking the logs. Retrying is great, but what if it's something like an invalid password?
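
The best automated check I can think of right now is to scan the Logstash log after the run, something like the sketch below (untested; it assumes the default RPM log location, and without filtering by timestamp it would also match errors left over from earlier runs):

# Untested sketch: look for error-level entries in the Logstash log.
# Assumes the default RPM install location; adjust for your setup.
LS_LOG="/var/log/logstash/logstash-plain.log"

if grep -q "\[ERROR\]" "${LS_LOG}" 2>/dev/null; then
  echo "Errors found in Logstash log" | tee -a ${TMP_FILE}
  send_email 1 "Errors found in Logstash log"
fi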

Ahh - I see.
At the level you are looking at, Logstash is really just a thread manager. Each input runs in its own thread, as do a certain number of worker threads for the filters and outputs. If an input thread does not throw an error, Logstash will not know about any runtime problems in that input. Some inputs validate things like connections at start-up while others don't; that is up to the plugin author. The jdbc input is designed to be run periodically by a cron-like scheduler, so it assumes the values in the config are good and that transient connection failures are simply retried at the next scheduled run. Granted, this is not ideal for an unscheduled, one-shot use case like yours; support for that would need to be coded (as it happens, the jdbc input is due for a major shake-up to make it more modular).
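
For reference, the scheduled mode the plugin is built around is just a cron-style option on the input; the expression below is only an example:

input {
  jdbc {
    # same connection settings as in your config, plus:
    schedule => "*/5 * * * *"   # run every five minutes; a failure is retried at the next tick
  }
}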

That said, and as you are adept at scripting, a possible workaround that does not depend on LS internals would be to have an exec output delete a "marker" file that your start script creates. The exec output only runs when it receives events, i.e. only if the jdbc input was successful in its attempt to fetch from Oracle (a rough sketch follows below).

I have not tested the above suggestion btw.

P.S. This would also tell you when there were no records in the query result (even if it did connect successfully).
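
To make that concrete, here is an untested sketch of the marker-file idea, spliced into the script above (the marker path and the extra exec output are my additions, not something the jdbc input provides):

MARKER_FILE="/tmp/es_update_marker"   # hypothetical path; pick anything suitable

touch "${MARKER_FILE}"

# Add this output block to the generated ${TMP_CONF}, next to the
# elasticsearch output. The exec output runs its command once per event,
# so the marker disappears only if the jdbc input actually fetched rows:
#
#   exec {
#     command => "rm -f ${MARKER_FILE}"
#   }

${LOGSTASH} -f ${TMP_CONF} --path.settings /etc/logstash

if [ -f "${MARKER_FILE}" ]; then
  echo "No events processed - jdbc input may have failed" | tee -a ${TMP_FILE}
  send_email 1 "No events processed - jdbc input may have failed"
  rm -f "${MARKER_FILE}"   # reset for the next run
fi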

Thank you for the response, that makes perfect sense. Unfortunately, the vast majority of the queries return zero rows, so the suggested workaround wouldn't quite do it for me.

Ultimately, I think I need to redesign how I import the data. I retrofitted Logstash into a script that previously used the (now rather quiet) JDBC Importer, and it seems Logstash was designed with a different model in mind.

Thanks again for your time!
Dave
