An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely {:error_message=>"\"\\xB0\" from ASCII-8BIT to UTF-8"

I am trying to query the v$active_session_history table from Logstash with the config file below, and I run into this error because of the XID column, which is of RAW type. I am using containers to set up my ELK stack.

I am trying to replicate this setup: https://www.elastic.co/blog/visualising-oracle-performance-data-with-the-elastic-stack

The XID column is defined as RAW in its table description, and the RAW type maps to a byte array over JDBC - https://docs.oracle.com/cd/B19306_01/java.102/b14188/datamap.htm

I am pretty sure I need to convert this particular column to a datatype Elasticsearch understands,

but I am not sure how it is done. Any help is appreciated.

input {
  jdbc {
    jdbc_validate_connection => true
    jdbc_connection_string => "jdbc:oracle:thin:@//16.16.16.16:1621/DB"
    jdbc_user => "username"
    jdbc_password => "password"
    jdbc_driver_library => "/opt/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    statement => "SELECT XID FROM V$ACTIVE_SESSION_HISTORY WHERE SAMPLE_TIME > :sql_last_value"
    codec => plain { charset => "ASCII-8BIT" }
    last_run_metadata_path => "/tmp/logstash-oradb.lastrun"
    record_last_run => true
    schedule => "*/2 * * * *"
  }
}

filter {
  mutate { convert => [ "sample_time", "string" ] }
  date { match => [ "sample_time", "ISO8601" ] }
  mutate { remove_field => [ "force_matching_signature" ] }
}

output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
    index => "logstash-%{+dd.MM.yyyy}"
  }
}
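Since RAW arrives over JDBC as raw bytes, one option is to convert the column to text inside the query itself. As a sketch (not tested against this exact setup), Oracle's RAWTOHEX function turns the RAW value into a hex string, so the event only ever contains valid UTF-8:

statement => "SELECT RAWTOHEX(XID) AS XID FROM V$ACTIVE_SESSION_HISTORY WHERE SAMPLE_TIME > :sql_last_value"

With the value converted at the source, the ASCII-8BIT codec workaround should no longer be needed.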

Full error logs:

An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely
{:error_message=>"\"\\x87\" from ASCII-8BIT to UTF-8", :error_class=>"LogStash::Json::GeneratorError",
:backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/json.rb:27:in jruby_dump'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.3.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in block in bulk'",
"org/jruby/RubyArray.java:2580:in map'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.3.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in block in bulk'",
"org/jruby/RubyArray.java:1814:in each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.3.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:117:in bulk'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:365:in safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:268:in submit'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:236:in retrying_submit'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:40:in multi_receive'",
"org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in multi_receive'",
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"]}

My understanding is that the Sequel::SQL::Blob object Oracle returns for the RAW column is causing it:

{
"xid" => #<Sequel::SQL::Blob:0xea1c0 bytes=8 content="K\x00\e\x003\xBC\x00\x00">,
"@version" => "1",
"@timestamp" => 2020-02-26T20:10:14.105Z
}
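If changing the SQL is not an option, the blob can also be re-encoded inside the pipeline with a ruby filter. A minimal sketch, assuming the value arrives as a binary string in the xid field:

filter {
  ruby {
    code => '
      xid = event.get("xid")
      # unpack the raw bytes into a hex string Elasticsearch can index
      event.set("xid", xid.to_s.unpack1("H*")) unless xid.nil?
    '
  }
}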

I came across a blog post mentioning that there is a patch for this - I will update this thread if I learn anything from applying it.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.