Logstash's elasticsearch plugin scrollId error

Hi,

I'm using the following configuration:

input {
  elasticsearch {
    hosts => ["xxx.xxx.xxx.xxx"]
    index => "index_tmp"
    query => '{ "query": { "query_string": { "query": "*" } } }'
    size => 500
    docinfo => true
  }
}
output {
  elasticsearch {
    hosts => ["localhost"]
    index => "index_tmp"
  }
}

But this error keeps popping up:

[2018-03-06T08:46:35,577][ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Elasticsearch hosts=>["xxx.xxx.xxx.xxx"], index=>"index_tmp", query=>"{ \"query\": { \"query_string\": { \"query\": \"*\" } } }", size=>500, docinfo=>true, id=>"fdb3459c6d3568701ad8bf852adc62ee24c0e99026a05270e32d4112a394a4fb", enable_metric=>true, codec=><LogStash::Codecs::JSON id=>"json_80499b82-9960-431b-b630-1578c9766522", enable_metric=>true, charset=>"UTF-8">, scroll=>"1m", docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"], ssl=>false>
  Error: [400] {"error":"ElasticsearchIllegalArgumentException[Failed to decode scrollId]; nested: IOException[Bad Base64 input character decimal 123 in array position 0]; ","status":400}
  Exception: Elasticsearch::Transport::Transport::Errors::BadRequest
  Stack: /opt/logstash-6.2.2/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/base.rb:202:in `__raise_transport_error'
/opt/logstash-6.2.2/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/base.rb:319:in `perform_request'
/opt/logstash-6.2.2/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/http/faraday.rb:20:in `perform_request'
/opt/logstash-6.2.2/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/client.rb:131:in `perform_request'
/opt/logstash-6.2.2/vendor/bundle/jruby/2.3.0/gems/elasticsearch-api-5.0.4/lib/elasticsearch/api/actions/scroll.rb:61:in `scroll'
/opt/logstash-6.2.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-elasticsearch-4.2.0/lib/logstash/inputs/elasticsearch.rb:244:in `scroll_request'
/opt/logstash-6.2.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-elasticsearch-4.2.0/lib/logstash/inputs/elasticsearch.rb:212:in `process_next_scroll'
/opt/logstash-6.2.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-elasticsearch-4.2.0/lib/logstash/inputs/elasticsearch.rb:206:in `do_run'
/opt/logstash-6.2.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-elasticsearch-4.2.0/lib/logstash/inputs/elasticsearch.rb:188:in `run'
/opt/logstash-6.2.2/logstash-core/lib/logstash/pipeline.rb:516:in `inputworker'
/opt/logstash-6.2.2/logstash-core/lib/logstash/pipeline.rb:509:in `block in start_input'

Logstash 6.2.2
Elasticsearch 6.1.2
Java 8u161
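
For what it's worth, decimal 123 is the ASCII code for '{', so my (untested) guess is that the plugin's newer Elasticsearch client sends the scroll ID wrapped in a JSON body, while the old ES 1.4 cluster on the input side only understands a bare scroll ID and tries to Base64-decode the whole request body. Roughly, with placeholder values:

# ES 1.x style: the raw scroll ID as the request body works...
curl -XGET 'http://xxx.xxx.xxx.xxx:9200/_search/scroll?scroll=1m' -d 'c2NhbjsxOzE6...'

# ...but a JSON body, which is what newer clients send, would make ES 1.x
# choke on the leading '{' (decimal 123) while Base64-decoding:
curl -XGET 'http://xxx.xxx.xxx.xxx:9200/_search/scroll?scroll=1m' -d '{"scroll_id":"c2NhbjsxOzE6..."}'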

I'm reindexing an index from an old cluster (ES 1.4) into a new cluster (ES 6.1.2), and I've ended up with more documents in the new index than exist in the old one.
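
I suspect the duplicates come from the restarts: every time the plugin hits this error it restarts the scroll from the beginning, and since my output doesn't set a document ID, each pass re-inserts the same documents under fresh auto-generated IDs. Reusing the source _id from the docinfo metadata (untested on my side, but docinfo => true is already set on the input) should at least make the copy idempotent:

output {
  elasticsearch {
    hosts => ["localhost"]
    index => "index_tmp"
    # reuse the source document ID so restarts overwrite instead of duplicating
    document_id => "%{[@metadata][_id]}"
  }
}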

Should I first copy those documents to an intermediate ES version (2.x) and then index them into the new cluster?
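
Or, as an alternative I haven't tried: could I skip Logstash entirely and pull the data with reindex-from-remote on the new cluster? Something like the following, assuming the old host listens on 9200 and is whitelisted via reindex.remote.whitelist in the new cluster's elasticsearch.yml:

curl -XPOST 'http://localhost:9200/_reindex' -H 'Content-Type: application/json' -d '{
  "source": {
    "remote": { "host": "http://xxx.xxx.xxx.xxx:9200" },
    "index": "index_tmp"
  },
  "dest": { "index": "index_tmp" }
}'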
