Issue with monitoring Oracle Databases with Logstash


I'm having some issues with Logstash monitoring a single Oracle database. Logstash takes 30 seconds to "warm up" before generating data, then runs for a further 45 seconds before promptly imploding. The error I've captured is:

Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:error_message=>"Cannot serialize instance of: Sequel::SQL::Blob",

I'm wondering if there are any Logstash gurus who can shine some light on this particular issue?



Can you paste your entire config? What version are you on? What OS?


Thanks so much for your assistance on this.

Configuration is as follows:

Host OS - OEL 6.7
JDK - 8 update 92
Logstash Version - 2.3.2

Logstash Config:
input {
  jdbc {
    jdbc_validate_connection => true
    jdbc_connection_string => ""
    jdbc_user => ""
    jdbc_password => ""
    jdbc_driver_library => "/opt/ojdbc6.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    last_run_metadata_path => "/tmp/logstash-oradb.lastrun"
    record_last_run => true
    schedule => "*/6 * * * *"
  }
}
filter {
  # Set the timestamp to that of the ASH sample, not current time.
  mutate { convert => [ "sample_time", "string" ] }
  date { match => ["sample_time", "ISO8601"] }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "oracle-"
    hosts => "localhost"
  }
}

Let me know if you need anything further.

Just bumping this post to the top, in the hope that a Logstash expert can shed some more light on this particular issue.

Check the incoming data; it's possible there is a "break" in it. In my case I had binary data in a single column (a binary(20)). When I tried the same insert against other tables, everything worked.
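If the binary column can't be cleaned at the source, one possible workaround (a sketch only, untested against this setup; the field name `blob_col` is hypothetical) is to drop or convert the offending field before the output stage:

filter {
  # Option 1: drop the binary column entirely (hypothetical field name).
  mutate { remove_field => ["blob_col"] }

  # Option 2: Base64-encode it so the event serializes as plain text.
  # Logstash 2.x ruby filter events support hash-style access.
  ruby {
    init => "require 'base64'"
    code => "event['blob_col'] = Base64.strict_encode64(event['blob_col'].to_s) if event['blob_col']"
  }
}

Alternatively, you may be able to convert the column in the SQL statement itself (for an Oracle BLOB, something like DBMS_LOB.SUBSTR or RAWTOHEX) so the jdbc input never hands a Sequel::SQL::Blob to the pipeline at all.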

Is there any update on this? My org has run into the same error on 2.3, but we do not hit it on 2.0.

Just seeing if you were able to fix this. We only noticed the issue when connecting to an Oracle DB.


Not from my point of view. I've tried alpha 4; it seems to last longer, but still implodes.

I believe there are 2 issues. Firstly, it's to do with the proof-of-concept architecture I had. I haven't had the time to revisit my updated architecture, with one Logstash instance as a message forwarder into Kafka, then a second Logstash instance (worker) feeding Elasticsearch. I hope to revisit this in the coming week or two.

Secondly, it's the sheer volume of data in question, which gives my first issue slightly more credit. I've been thinking about which table, with limited diagnostic information, I could start with, then scale up from there.

Does that make sense? Or have you built it out and are still having this issue?

I am having the same issue. Perhaps that's because some of my db columns contain JSON objects.
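For what it's worth, raw bytes that aren't valid UTF-8 generally can't be dumped to JSON directly, which matches the serialization error in this thread; Base64-encoding them first sidesteps that. A standalone Ruby sketch (not Logstash-specific, and the byte string is made up for illustration):

```ruby
require 'base64'
require 'json'

# Hypothetical raw bytes, as they might come back from a BLOB column.
blob = "\x00\xFF\x10raw-bytes".b

# Base64 turns the bytes into a plain ASCII string...
encoded = Base64.strict_encode64(blob)

# ...which then serializes to JSON without complaint.
puts JSON.generate("payload" => encoded)
```

If your columns hold JSON text rather than true binary, the fix may instead be to cast them to VARCHAR/CLOB in the SELECT so the driver returns a string type.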

Is there a solution for this ?

Facing the same issue in logstash 2.4.0

LogStash::Json::GeneratorError: Cannot serialize instance of: Sequel::SQL::Blob
jruby_dump at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/json.rb:53
to_json at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-event-2.4.0-java/lib/logstash/event.rb:145
encode at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-json_lines-2.1.3/lib/logstash/codecs/json_lines.rb:48
receive at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-stdout-2.0.6/lib/logstash/outputs/stdout.rb:55
multi_receive at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/outputs/base.rb:109
each at org/jruby/
multi_receive at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/outputs/base.rb:109
worker_multi_receive at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/output_delegator.rb:130
multi_receive at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/output_delegator.rb:114
output_batch at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:301
each at org/jruby/
output_batch at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:301
worker_loop at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:232
start_workers at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:201

Any assistance would be greatly appreciated