Pipeline error reader unacceptable code point ' ' (0x0) special characters are not allowed

Hello,
I have a pipeline that had been working for almost a year. Four days ago we restarted Elasticsearch because it was consuming too much RAM (exceeding the JVM heap size), and afterwards the pipeline stopped working and the error below appeared in the Logstash logs:

[2024-11-14T14:55:51,984][ERROR][logstash.javapipeline    ][ccslockeduser] Pipeline error {:pipeline_id=>"ccslockeduser", :exception=>#<Psych::SyntaxError: (<unknown>): reader unacceptable code point ' ' (0x0) special characters are not allowed
in "reader", position 0 at line 0 column 0>, :backtrace=>["org/jruby/ext/psych/PsychParser.java:312:in `_native_parse'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/psych-5.1.2-java/lib/psych/parser.rb:62:in `parse'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/psych-5.1.2-java/lib/psych.rb:455:in `parse_stream'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/psych-5.1.2-java/lib/psych.rb:399:in `parse'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/psych-5.1.2-java/lib/psych.rb:323:in `safe_load'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/logstash-integration-jdbc-5.4.9/lib/logstash/plugin_mixins/jdbc/value_tracking.rb:39:in `load_yaml'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/logstash-integration-jdbc-5.4.9/lib/logstash/plugin_mixins/jdbc/value_tracking.rb:128:in `read'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/logstash-integration-jdbc-5.4.9/lib/logstash/plugin_mixins/jdbc/value_tracking.rb:61:in `common_set_initial'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/logstash-integration-jdbc-5.4.9/lib/logstash/plugin_mixins/jdbc/value_tracking.rb:75:in `set_initial'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/logstash-integration-jdbc-5.4.9/lib/logstash/plugin_mixins/jdbc/value_tracking.rb:33:in `initialize'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/logstash-integration-jdbc-5.4.9/lib/logstash/plugin_mixins/jdbc/value_tracking.rb:17:in `build_last_value_tracker'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/logstash-integration-jdbc-5.4.9/lib/logstash/inputs/jdbc.rb:285:in `register'", "D:/ELK_8.13.2/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/logstash-mixin-ecs_compatibility_support-1.3.0-java/lib/logstash/plugin_mixins/ecs_compatibility_support/target_check.rb:48:in `register'", 
"D:/ELK_8.13.2/logstash-8.13.2/logstash-core/lib/logstash/java_pipeline.rb:237:in `block in register_plugins'", "org/jruby/RubyArray.java:1989:in `each'", "D:/ELK_8.13.2/logstash-8.13.2/logstash-core/lib/logstash/java_pipeline.rb:236:in `register_plugins'", "D:/ELK_8.13.2/logstash-8.13.2/logstash-core/lib/logstash/java_pipeline.rb:395:in `start_inputs'", "D:/ELK_8.13.2/logstash-8.13.2/logstash-core/lib/logstash/java_pipeline.rb:320:in `start_workers'", "D:/ELK_8.13.2/logstash-8.13.2/logstash-core/lib/logstash/java_pipeline.rb:194:in `run'", "D:/ELK_8.13.2/logstash-8.13.2/logstash-core/lib/logstash/java_pipeline.rb:146:in `block in start'"], "pipeline.sources"=>["D:/ELK_8.13.2/logstash-8.13.2/config/conf.d/lockeduser_config.conf"], :thread=>"#<Thread:0xea51646 D:/ELK_8.13.2/logstash-8.13.2/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}

I have other pipelines with the same configuration (same database, different tables) and they are working just fine.

Pipeline configuration:

input {
    jdbc {
        jdbc_driver_library => "D:\ELK_8.13.2\logstash-conf\mssql-jdbc-12.2.0.jre8.jar"
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        jdbc_connection_string => "jdbc:sqlserver://****\*DatabaseName*;encrypt=true;trustServerCertificate=true;"
        jdbc_user => "****"
        jdbc_password => "****"
        jdbc_validate_connection => true
        jdbc_default_timezone => "Asia/Riyadh"
        jdbc_validation_timeout => 120
        schedule => "*/10 * * * *"
        connection_retry_attempts => 3
        last_run_metadata_path => "D:/ELK_8.13.2/logstash-8.13.2/config/metadata/lockeduser-lastrun"
        statement => "
            SELECT 'Locked Machine' AS 'SERVICE_NAME', ID, STATUS_ID,
                   CASE STATUS_ID WHEN '205' THEN 'Success' ELSE 'Failure' END AS 'STATUS',
                   CREATION_TIME AS CALL_START_TIME,
                   ACCOUNT_NAME, MACHINE_NAME, MOBILE, MOBILE AS 'SERVED_NUMBER',
                   ARABIC_NAME, GIVEN_NAME, PRIMARY_DOMAIN, DOMAIN_CNTROLLER, EXCEPTION,
                   UPDATE_TIMESTAMP, TIMESTAMP
            FROM [ACTIVE_DIRECTORY].[ACCOUNT].[LOCKED_MACHINE]
            WHERE ID > 0
        "
        use_column_value => true
        tracking_column => "id"
        tracking_column_type => "numeric"
        tags => ["lockeduser"]
    }
}
filter {
}
output {
    elasticsearch {
        hosts => ["****"]
        index => "bk_lockeduserindex-%{+YYYY}"
        document_id => "%{id}"
        user => "elastic"
        password => "****"
        ssl => true
        ssl_certificate_verification => false
    }
}

Psych is the library used to load and parse YAML files, in this case the last_run_metadata.
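For what it's worth, the failure is easy to reproduce outside Logstash. Psych (libyaml underneath) rejects any input containing a NUL byte, which is what a corrupted metadata file typically contains. A minimal sketch:

```ruby
require 'psych'

# A healthy last-run file is just a YAML document holding the last tracked value
puts Psych.safe_load("--- 12345\n")   # parses as a plain integer

# Input containing NUL bytes (0x0) fails exactly like the pipeline error above
begin
  Psych.safe_load("\x00\x00\x00")
rescue Psych::SyntaxError => e
  puts e.message
end
```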

Take a look at "D:/ELK_8.13.2/logstash-8.13.2/config/metadata/lockeduser-lastrun" and fix it, or, if necessary, remove it. This will reset the plugin's memory of what it has read but will allow the plugin to start.
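If you want to confirm before deleting anything, a small check like this will tell you whether the file is the problem (a sketch; `check_last_run` is just a name I made up):

```ruby
require 'psych'

# Returns :corrupted if the metadata file contains NUL bytes,
# otherwise the parsed last tracked value
def check_last_run(path)
  raw = File.binread(path)
  return :corrupted if raw.include?("\x00".b)
  Psych.safe_load(raw)
end

# e.g. check_last_run("D:/ELK_8.13.2/logstash-8.13.2/config/metadata/lockeduser-lastrun")
```

If it reports :corrupted, delete the file; the jdbc input will recreate it on the next scheduled run.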

Edited to add: It is not clear why you are using a tracking column and recording last run metadata when your query does not use :sql_last_value.
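For reference, an incremental query would reference the tracked value, something like this (a sketch using your table, not your exact statement):

```
statement => "
    SELECT ID, STATUS_ID, MACHINE_NAME, CREATION_TIME
    FROM [ACTIVE_DIRECTORY].[ACCOUNT].[LOCKED_MACHINE]
    WHERE ID > :sql_last_value
"
```

With that in place, the last tracked id is substituted into the query on each run, so only new rows are fetched.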

That solved it, thanks a lot!
Just a question: how did you know it was the last_run file and not the pipeline itself? And why was it working and then suddenly stopped?

Note: I wanted to update all records on every run; that's why I was not using :sql_last_value at that time.

The stack trace of the exception shows that it happened in the code that loads the last run metadata.
