Hi folks,
I am using Logstash 7.5.0 to reindex my index from ES 6.8.5 to ES 7.5.0.
I also tried Logstash 6.8.5 for the job, but I get the same error with both versions.
I saw some similar posts here, but none of them was related to the slice API, so I decided to create a new topic.
The error occurs when I set slices on the elasticsearch input; if it is not set, everything works fine, but very slowly.
Here is my config file:
input {
  elasticsearch {
    hosts => ["http://10.0.0.5:9200", "http://10.0.0.6:9200", "http://10.0.0.7:9200"]
    index => "report"
    query => '{ "query": { "range": { "date": { "lte": "14-12-2019 23:59:59" } } }, "sort": [ "_doc" ] }'
    size => 1000
    scroll => "1m"
    slices => 8        # removing this setting makes everything work, just very slowly
    docinfo => true
  }
}
filter {
  # parse the source "date" (dd-MM-yyyy HH:mm:ss, America/Sao_Paulo) into @timestamp
  date {
    match => [ "date", "dd-MM-yyyy HH:mm:ss" ]
    timezone => "America/Sao_Paulo"
  }
  # rewrite "date" as local-time ISO 8601, e.g. "2019-12-14T23:59:59-03:00"
  ruby {
    code => "event.set('date', event.get('[@timestamp]').time.localtime.strftime('%Y-%m-%dT%H:%M:%S%:z'))"
    remove_field => [ "@timestamp", "@version" ]
  }
}
output {
  elasticsearch {
    manage_template => false
    hosts => ["http://127.0.0.1:9200", "http://10.0.0.9:9200"]
    index => "report"
    document_id => "%{[@metadata][_id]}"
    pipeline => "report-daily-index"
  }
}
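For reference, with slices => 8 the input plugin starts eight threads, and each one runs its own sliced scroll against the source cluster. If I read the docs correctly, slice 0 should correspond roughly to this request (my reconstruction, not captured from the plugin):

# sliced scroll for slice 0 of 8 against the source cluster
curl -s -H 'Content-Type: application/json' \
  'http://10.0.0.5:9200/report/_search?scroll=1m' -d '{
    "slice": { "id": 0, "max": 8 },
    "size": 1000,
    "query": { "range": { "date": { "lte": "14-12-2019 23:59:59" } } },
    "sort": [ "_doc" ]
  }'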
And below is the Logstash error:
[2019-12-15T18:20:21,055][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://127.0.0.1:9200", "http://10.0.0.9:9200"]}
[2019-12-15T18:20:21,106][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-12-15T18:20:21,110][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>512, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>4096, "pipeline.sources"=>["/etc/logstash/conf.d/elastic.conf"], :thread=>"#<Thread:0x4581ec2e run>"}
[2019-12-15T18:20:21,342][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2019-12-15T18:20:21,381][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>6, :slices=>8}
[2019-12-15T18:20:21,380][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>3, :slices=>8}
[2019-12-15T18:20:21,384][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>5, :slices=>8}
[2019-12-15T18:20:21,380][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>7, :slices=>8}
[2019-12-15T18:20:21,380][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>0, :slices=>8}
[2019-12-15T18:20:21,380][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>2, :slices=>8}
[2019-12-15T18:20:21,380][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>1, :slices=>8}
[2019-12-15T18:20:21,380][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>4, :slices=>8}
[2019-12-15T18:20:21,479][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-12-15T18:20:21,776][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-12-15T18:20:46,527][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<Elasticsearch::Transport::Transport::Error: Cannot get new connection from pool.>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/base.rb:254:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/http/faraday.rb:20:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/client.rb:131:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-api-5.0.5/lib/elasticsearch/api/actions/scroll.rb:61:in `scroll'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.3.2/lib/logstash/inputs/elasticsearch.rb:276:in `scroll_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.3.2/lib/logstash/inputs/elasticsearch.rb:244:in `process_next_scroll'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.3.2/lib/logstash/inputs/elasticsearch.rb:236:in `do_run_slice'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.3.2/lib/logstash/inputs/elasticsearch.rb:216:in `block in do_run'"]}
[2019-12-15T18:20:46,704][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2019-12-15T18:20:56,333][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.5.0"}
[2019-12-15T18:20:57,558][INFO ][org.reflections.Reflections] Reflections took 23 ms to scan 1 urls, producing 20 keys and 40 values
[2019-12-15T18:20:58,129][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/, http://10.0.0.9:9200/]}}
[2019-12-15T18:20:58,282][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2019-12-15T18:20:58,317][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2019-12-15T18:20:58,321][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-12-15T18:20:58,352][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://10.0.0.9:9200/"}
[2019-12-15T18:20:58,386][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://127.0.0.1:9200", "http://10.0.0.9:9200"]}
[2019-12-15T18:20:58,442][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-12-15T18:20:58,446][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>512, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>4096, "pipeline.sources"=>["/etc/logstash/conf.d/elastic.conf"], :thread=>"#<Thread:0x21db0ba8 run>"}
[2019-12-15T18:20:58,648][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2019-12-15T18:20:58,675][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>6, :slices=>8}
[2019-12-15T18:20:58,675][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>0, :slices=>8}
[2019-12-15T18:20:58,677][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>1, :slices=>8}
[2019-12-15T18:20:58,675][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>3, :slices=>8}
[2019-12-15T18:20:58,675][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>4, :slices=>8}
[2019-12-15T18:20:58,675][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>2, :slices=>8}
[2019-12-15T18:20:58,675][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>7, :slices=>8}
[2019-12-15T18:20:58,675][INFO ][logstash.inputs.elasticsearch][main] Slice starting {:slice_id=>5, :slices=>8}
[2019-12-15T18:20:58,773][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-12-15T18:20:59,045][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
I could not see any related errors in the logs of the source Elasticsearch cluster either. The FATAL error is raised from the scroll request about 25 seconds after the slices start, and Logstash then restarts the pipeline and hits the same error again.
I'm not sure if it is a real bug in Elasticsearch or if I need to change some configuration.
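If anyone wants me to check something on the source cluster, just let me know. For example, I can report how many scroll contexts the nodes are holding open (a sliced scroll should keep one context per slice):

# open scroll contexts per node on the source cluster
curl -s 'http://10.0.0.5:9200/_nodes/stats/indices/search?filter_path=nodes.*.indices.search.open_contexts'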
I would appreciate it if anyone could help me.
Thank you,
Gustavo Rodrigues