:error_message=>"undefined method `sanitized'"

Hi,

My Elastic Stack setup works fine until, seemingly at random, Logstash instances start logging the exception below and stop indexing into the Elasticsearch cluster.

[2017-10-16T13:00:57,215][ERROR][logstash.outputs.elasticsearch] Encountered an unexpected error submitting a bulk request! Will retry. {:error_message=>"undefined method `sanitized' for \"http://10.44.129.198:9200/_bulk\":String", :class=>"NoMethodError", :backtrace=>["c:/Work/files/ELK/logstash-5.6.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:249:in `safe_bulk'", "c:/Work/files/ELK/logstash-5.6.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:222:in `safe_bulk'", "c:/Work/files/ELK/logstash-5.6.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:119:in `submit'", "c:/Work/files/ELK/logstash-5.6.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:87:in `retrying_submit'", "c:/Work/files/ELK/logstash-5.6.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:38:in `multi_receive'", "c:/Work/files/ELK/logstash-5.6.2/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:13:in `multi_receive'", "c:/Work/files/ELK/logstash-5.6.2/logstash-core/lib/logstash/output_delegator.rb:49:in `multi_receive'", "c:/Work/files/ELK/logstash-5.6.2/logstash-core/lib/logstash/pipeline.rb:436:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "c:/Work/files/ELK/logstash-5.6.2/logstash-core/lib/logstash/pipeline.rb:435:in `output_batch'", "c:/Work/files/ELK/logstash-5.6.2/logstash-core/lib/logstash/pipeline.rb:381:in `worker_loop'", "c:/Work/files/ELK/logstash-5.6.2/logstash-core/lib/logstash/pipeline.rb:342:in `start_workers'"]}

Please, what is the cause of this issue and how can I fix it?

Kind Regards,

Hyder

Please show your config and provide the version you are running.

Hi,

Logstash version 5.6.2 and the config is below:

input {
  jdbc {
    jdbc_driver_library => "C:\Program Files\Java\sqljdbc_6.2\enu\mssql-jdbc-6.2.1.jre8.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://xxxxxxxx;databaseName=xxxxxxxx"
    jdbc_validate_connection => true
    jdbc_user => "xxxxxx"
    jdbc_password => "xxxxxx"
    statement => "SELECT * FROM [Database].[dbo].[Table] with (NOLOCK) where [RecordedTime] > '2017-10-13'"
    type => "CustomType"
    jdbc_fetch_size => 15
  }
}
output {
  elasticsearch {
    hosts => ["10.34.29.37:9200","10.34.29.12:9200","10.34.29.50:9200"]
    index => "test-%{+YYYY.MM.dd}"
    document_type => "CustomType"
    document_id => "%{id}"
    http_compression => true
    user => "xxxxxxx"
    password => "xxxxxxx"
  }
  stdout { codec => rubydebug }
}

I think Elasticsearch is rejecting the bulk request because of its size, and Logstash keeps retrying it, so it gets stuck in an endless loop. I came to this conclusion because after upgrading to 5.6.3 I got a different error; more info is here
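If oversized bulk requests really are the trigger, one possible mitigation (a sketch, not a confirmed fix for this error) would be to shrink the batches Logstash sends by lowering `pipeline.batch.size` in `logstash.yml`:

```yaml
# logstash.yml — hypothetical mitigation, assuming the "bulk request too large" theory:
# smaller pipeline batches mean smaller _bulk request bodies sent to Elasticsearch.
pipeline.batch.size: 50   # default in Logstash 5.x is 125 events per worker batch
```

This only reduces request size; it would not explain the `NoMethodError` itself, which looks like a plugin-level bug rather than a rejection from Elasticsearch.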

@warkolm Any update? I found the question below and replied to it as well, since I think these two issues are related.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.