I am practicing with Logstash and I can't understand the source of this error. I want to modify an existing index by removing one field and renaming another. I'm on Logstash 5.6.2 with the Elasticsearch output plugin 7.4.0.
This is an example of my entries:
{
  "_index": "test-shakespeare-italian",
  "_type": "logs",
  "_id": "AV8QsD5pzoUbREG_8icr",
  "_score": 1,
  "_source": {
    "nome_opera": "Henry IV",
    "battuta_numero": "3.2.9",
    "battuta": "Make me believe that thou art only markd",
    "@timestamp": "2017-10-12T13:05:06.519Z",
    "id_battuta": 1841,
    "dialogo_numero": 1,
    "@version": "1",
    "interlocutore": "KING HENRY IV"
  }
}
This is my pipeline:
# this pipeline updates an existing index by
# renaming the field "nome_opera" to "opera"
# and dropping the field "dialogo_numero"
input {
  elasticsearch {
    hosts => "localhost"
    index => "test-shakespeare-italian"
    query => '{"query": {"match_all": {}}}'
  }
}
filter {
  mutate {
    rename       => { "nome_opera" => "opera" }
    remove_field => "dialogo_numero"
  }
}
output {
  elasticsearch {
    index       => "test-shakespeare-italian"
    action      => "update"
    document_id => "OOW-j2DeSCmnsVWVsywOVQ"
    hosts       => "localhost:9200"
    version     => "1"
  }
  #stdout { codec => rubydebug }
}
The pipeline fails with the following error:
[2017-10-13T11:38:39,467][ERROR][logstash.outputs.elasticsearch] Encountered an unexpected error submitting a bulk request! Will retry. {:error_message=>"undefined method `sanitized' for \"http://localhost:9200/_bulk\":String", :class=>"NoMethodError", :backtrace=>["/usr/local/Cellar/logstash/5.6.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:249:in `safe_bulk'", "/usr/local/Cellar/logstash/5.6.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:119:in `submit'", "/usr/local/Cellar/logstash/5.6.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:87:in `retrying_submit'", "/usr/local/Cellar/logstash/5.6.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:38:in `multi_receive'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:13:in `multi_receive'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/output_delegator.rb:49:in `multi_receive'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/pipeline.rb:436:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/pipeline.rb:435:in `output_batch'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/pipeline.rb:381:in `worker_loop'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/pipeline.rb:342:in `start_workers'"]}
- I am sure that the document_id is correct.
- The pipeline works perfectly if, instead of updating the existing index, I write to a new one.
- Running with --config.test_and_exit reports that the configuration is OK.
Online I found this thread, "Logstash Encountered an unexpected error submitting a bulk request! undefined method `sanitized'", and this GitHub issue: https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/612
Both lead me to think this is a bug rather than something I'm doing wrong. Unfortunately, neither link offers a real solution. Do I have to revert to an older version of Logstash? Any other ideas?
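In case it helps narrow down whether the hard-coded document_id or the version setting is what triggers the failure, this is the variant I plan to try next. It's only a sketch, assuming the input plugin's docinfo option exposes the original _id under [@metadata] as documented; I haven't run it yet:

input {
  elasticsearch {
    hosts   => "localhost"
    index   => "test-shakespeare-italian"
    query   => '{"query": {"match_all": {}}}'
    docinfo => true   # copy _index, _type and _id into [@metadata]
  }
}
filter {
  mutate {
    rename       => { "nome_opera" => "opera" }
    remove_field => [ "dialogo_numero" ]
  }
}
output {
  elasticsearch {
    hosts       => "localhost:9200"
    index       => "test-shakespeare-italian"
    action      => "update"
    document_id => "%{[@metadata][_id]}"   # reuse each document's own id
    # no "version" option here
  }
}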