Logstash 6.7 plugin Elasticsearch Error "bignum too big to convert into `long'"

Hi community,

Recently we upgraded Logstash from 6.6 to 6.7. Since then, we have been receiving this message:

[ERROR][logstash.outputs.elasticsearch] An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely {:error_message=>"bignum too big to convert into `long'", :error_class=>"LogStash::Json::GeneratorError", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/json.rb:27:in `jruby_dump'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:117:in `bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:286:in `safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:191:in `submit'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:159:in `retrying_submit'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:38:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:390:in `block in output_batch'", "org/jruby/RubyHash.java:1419:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:389:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:341:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:304:in `block in start_workers'"]}

Our deployment is:

  • logstash 6.7.0
  • Elasticsearch - Elastic Cloud 6.7

The errors appear when we set the following queuing settings:

queue.type: persisted
path.queue: /var/lib/logstash/queue
queue.page_capacity: 64mb
queue.max_events: 0
queue.max_bytes: 1024mb
queue.checkpoint.acks: 1024
queue.checkpoint.writes: 1024
queue.checkpoint.interval: 1000
dead_letter_queue.enable: false
dead_letter_queue.max_bytes: 1024mb
path.dead_letter_queue: /var/lib/logstash/dead_letter_queue

Does anyone know what is wrong with these settings?

Thank you,

Best Regards

I am getting exactly the same message, but my deployment is on Docker, and it seems to be confined to data received from Metricbeat via Kafka.

We are not using persistent queueing at all, though.

I am attempting to diagnose the problem further.

I have two situations that cause this error to occur. The first was a pipeline that was ingesting cloned events from another Elasticsearch cluster. The "id" field from the original cluster was being passed along to the new cluster, and the Logstash JRuby instance was identifying it as a Bignum and trying (unsuccessfully) to cast it to a Long.

I now remove that field as part of the Logstash pipeline, and that pipeline no longer generates this error.
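For reference, the removal is just a mutate filter in the pipeline. A minimal sketch (the field name "id" here is only an illustration based on my description above, not my actual config; adjust it to whatever field carries the oversized number):

filter {
  mutate {
    # Drop the field whose value exceeds the signed 64-bit range before
    # the elasticsearch output tries to serialize the event to JSON.
    remove_field => ["id"]
  }
}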

The other pipeline I'm having trouble with is for Metricbeat. I use Kafka (Confluent 4.1.2) to buffer Metricbeat data coming from about 2700 servers, and some value in those events is causing the Bignum / Long casting error and blocking all Elasticsearch indexing.

My workaround for this was to switch the Metricbeat output from Kafka to sending directly to the Elasticsearch HTTP ports, but I will be trying to go back to using Kafka, so I expect to spend some more time looking into this problem.
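For anyone who wants to try the same workaround, it is only a change in metricbeat.yml. A rough sketch with placeholder host names (not my actual addresses):

# Comment out the Kafka output...
#output.kafka:
#  hosts: ["kafka-broker:9092"]
#  topic: "metricbeat"

# ...and send directly to Elasticsearch instead.
output.elasticsearch:
  hosts: ["es-node:9200"]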

I got the same issue; I was upgrading from 6.5.0 to 6.7.0.
I have two Logstash instances in my ELK cluster, but only one of them is throwing these errors.
No idea how to solve this... :frowning:

[ERROR][logstash.outputs.elasticsearch] An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely {:error_message=>"bignum too big to convert into `long'", :error_class=>"LogStash::Json::GeneratorError", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/json.rb:27:in `jruby_dump'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:117:in `bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:286:in `safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:191:in `submit'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:159:in `retrying_submit'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:38:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:390:in `block in output_batch'", "org/jruby/RubyHash.java:1419:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:389:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:341:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:304:in `block in start_workers'"]}

An update on this strange issue: our Logstash server is now running without persistent queueing and the errors have come back.

@jeffkirk1, how did you identify the problematic field?

Thank you,

Best Regards

@alfonso.viso I didn't really resolve the error, I just worked around it by sending Metricbeat data directly to Elasticsearch instead of through Kafka and Logstash.

I really want this to be fixed because I have several workflows that require me to perform lookups and add data to the Metricbeat events, which can't be done with Elasticsearch ingest processors. I'll be keeping an eye out for Elasticsearch updates.

I'm seeing this in Logstash 6.7.1. With the same field in 6.6.2, you get an exception instead:

[2019-04-09T18:09:05,118][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"cori-sedc-power_bp-2019.04.10", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x413a60c6>], :response=>{"index"=>{"_index"=>"cori-sedc-power_bp-2019.04.10", "_type"=>"doc", "_id"=>"vHfKBGoB_7J-twolj_E0", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [value] of type [long] in document with id 'vHfKBGoB_7J-twolj_E0'", "caused_by"=>{"type"=>"i_o_exception", "reason"=>"Numeric value (18446744073708165143) out of range of long (-9223372036854775808 - 9223372036854775807)\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@16d9ee52; line: 1, column: 183]"}}}}}

In Logstash 6.7.1, you get this:

[2019-04-09T18:14:49,728][ERROR][logstash.outputs.elasticsearch] An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely {:error_message=>"bignum too big to convert into `long'", :error_class=>"LogStash::Json::GeneratorError", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/json.rb:27:in `jruby_dump'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:117:in `bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:286:in `safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:191:in `submit'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:159:in `retrying_submit'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:38:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:390:in `block in output_batch'", "org/jruby/RubyHash.java:1419:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:389:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:341:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:304:in `block in start_workers'"]}

which eventually hangs logstash.
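Until the plugin is fixed, one possible way to keep a single oversized value from blocking the whole bulk request is to guard the field in the pipeline. This is only a sketch of a workaround, assuming the offending field is called "value" as in the 6.6.2 warning above:

filter {
  ruby {
    code => "
      v = event.get('value')
      # Anything outside the signed 64-bit range cannot be serialized as a long,
      # so store it as a string instead (or use event.remove('value') to drop it).
      if v.is_a?(Integer) && (v > 9223372036854775807 || v < -9223372036854775808)
        event.set('value', v.to_s)
      end
    "
  }
}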

Yep, same here. Upgraded from Logstash 6.6.0 to 6.7.1 and got the same message:

[2019-04-10T13:36:15,692][ERROR][logstash.outputs.elasticsearch] An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely {:error_message=>"bignum too big to convert into `long'", :error_class=>"LogStash::Json::GeneratorError", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/json.rb:27:in `jruby_dump'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:117:in `bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:286:in `safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:191:in `submit'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:159:in `retrying_submit'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:38:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:390:in `block in output_batch'", "org/jruby/RubyHash.java:1419:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:389:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:341:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:304:in `block in start_workers'"]}

I also tried upgrading the 'logstash-output-elasticsearch' plugin from 9.4.0 to 10.0.1, but the error still exists.
I tried practically everything and ended up solving it by returning to Logstash v6.6.0 :frowning:
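For completeness, the plugin version changes mentioned above were done with the standard plugin tool; roughly like this (10.0.1 is simply the version I tried, adjust as needed):

bin/logstash-plugin list --verbose logstash-output-elasticsearch
bin/logstash-plugin install --version 10.0.1 logstash-output-elasticsearch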

After upgrading Logstash, Elasticsearch and all Beats to version 6.7.1 yesterday, I am no longer experiencing this problem. I was able to return my workflows to their original state and they're all working well now. I hope this helps.

Hi all,

Yesterday we upgraded Logstash to 6.7.1 and, like jeffkirk1, the problem has disappeared. Thank you very much for your help.

Hi all, this week we upgraded to Logstash 7.0.0 and the problems came back. Is there a hotfix pending for it? Thank you.

Hi all,

I've just found this URL where they discuss the issue and explain how to fix it until a new version of the logstash-output-elasticsearch plugin is released.

I've just applied the "patch" to one of our Logstash servers. If it fixes the issue, I will let you know.

Thank you,

Regards

Thanks for everything, @alfonso.viso. Let me know if the patch works.

Hello @Miguel_Gomez_Cuesta, it seems the patch is working. Since we applied it, the problem has been solved.
